Effective GPT-4 Data Science Prompting: Building Guardrails for Reliable Output
Eliminating placeholder data and ensuring usable Python code
As a Computer Science professor, I have been using GPT-4 for over a year to assist with my data visualization workflow.
Recently, I have noticed that GPT-4 has been showing great improvement in how it handles data visualization requests.
However, there are still some daily frustrations that I encounter in my GPT-4 prompting workflow:
GPT-4 often loses its “train of thought” from the start to the end of a conversation, particularly as the responses become more complex.
GPT-4 “makes up” data (and data field names) if it cannot find the actual data or field names it needs; it calls this “placeholder” data.
To minimize these two issues (and my overall frustration), I now use a tool and follow a method every time in my prompting workflow.
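As a quick illustration of the kind of guardrail I mean (a simplified sketch rather than the full method, with a made-up DataFrame and column names): before running any chart code that GPT-4 produces, check that every column it references actually exists in the real data, so an invented field name fails loudly instead of quietly producing a chart built on placeholder data.

```python
import pandas as pd

def check_columns(df: pd.DataFrame, requested_columns: list[str]) -> None:
    """Raise an error listing any requested columns that are missing from df."""
    missing = [col for col in requested_columns if col not in df.columns]
    if missing:
        raise KeyError(
            f"GPT-4 referenced columns that are not in the data: {missing}. "
            f"Available columns: {list(df.columns)}"
        )

# Hypothetical example: GPT-4 suggested plotting 'total_sales' by 'region',
# but suppose the real data only contains 'sales' and 'region'.
df = pd.DataFrame({"sales": [100, 250], "region": ["East", "West"]})
check_columns(df, ["total_sales", "region"])  # raises KeyError: 'total_sales' was invented
```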