My script is basically a sequence of arithmetic calculations on a pandas DataFrame (extracted from Excel). I already tried putting the whole code into the prompt, yet the model still behaves the same way.
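For context, the script looks roughly like the sketch below: a chain of column-wise arithmetic on a DataFrame. The column names and the 10% tax factor are made up for illustration; the real frame would come from `pd.read_excel`.

```python
import pandas as pd

# In the real script the frame comes from Excel, e.g.:
#   df = pd.read_excel("input.xlsx")
# A small inline frame stands in for it here.
df = pd.DataFrame({"qty": [2, 3, 5], "price": [10.0, 4.0, 2.5]})

# A chained arithmetic sequence of the kind the model is asked to
# generate (column names and the 0.1 rate are hypothetical).
df["subtotal"] = df["qty"] * df["price"]
df["tax"] = df["subtotal"] * 0.1
df["total"] = df["subtotal"] + df["tax"]

print(df["total"].sum())  # grand total across all rows
```

Each step depends on the previous one, which is why generating the code snippet by snippet forces so much context to be re-sent.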
I get a better response when I prompt it to reply snippet by snippet, but this gets expensive, since the earlier output has to be fed back as input tokens on every round.
It would be good if the model could use its full 4096-token output length when the user really needs it.