Data at Depth
Effective GPT-4 Data Science Prompting: Building Guardrails for Reliable Output

Eliminating placeholder data and ensuring usable Python code

John Loewen
Jan 10, 2025 ∙ Paid

As a Computer Science professor, I have been using GPT-4 for over a year to assist with my data-visualization workflow.

Recently, I have noticed that GPT-4 has improved considerably at handling data-visualization requests.

However, I still run into some daily frustrations in my GPT-4 prompting workflow:

  1. GPT-4 often loses its “train of thought” between the start and end of a conversation, particularly as the responses become more complex.

  2. GPT-4 “makes up” data (and data field names) when it cannot find the actual data or field names it needs. It calls this “placeholder” data.
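To make issue 2 concrete, here is a sketch of what “placeholder” output typically looks like, along with one generic sanity check that catches it. The column names, values, and schema below are purely illustrative assumptions, not taken from this post, and the check is a common-sense guardrail, not the specific tool or method described later.

```python
import pandas as pd

# Illustrative only: when GPT-4 cannot read the real file, it often
# invents "placeholder" rows and guesses field names like these.
placeholder_df = pd.DataFrame({
    "Country": ["Country A", "Country B"],  # guessed field name
    "Value": [100, 200],                    # made-up values
})

# One simple guardrail: before running generated plotting code, compare
# the field names it references against the columns that actually exist
# in your dataset (a hypothetical schema here).
actual_columns = {"country_name", "gdp_usd"}

missing = set(placeholder_df.columns) - actual_columns
if missing:
    print(f"Generated code references unknown fields: {sorted(missing)}")
```

A check like this turns silent fabrication into a loud, immediate failure, which is the general idea behind building guardrails around model output.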

To minimize these two issues (and my overall frustration), I have a tool and a method that I now follow every time in my prompting workflow.

© 2025 John Loewen