GenAI, which includes prompt engineering, is a subset of AI. Therefore, I wonder: What kind of questions are on-topic here but not already on-topic on AI.SE, DataScience.SE or Stats.SE?
I'm looking at existing questions here, and they are either far too broad or already perfectly on-topic on AI.SE, DataScience.SE, or Stats.SE (those three sites already overlap greatly with one another). And asking for "ready-to-go" prompts is off-topic here.
Examples:
- Does ChatGPT know how to count? posted here is similar to How does ChatGPT know math?, How is GPT 4 able to solve math?, Why is ChatGPT bad at math?, ...
- What's the difference between the terms "ChatGPT", "GPT", "GPT-4", and "GPT-3.5-turbo"? posted here is similar to Are GPT-3.5 series models based on GPT-3?
- Are there powerful text generators that preserve attribution? posted here is similar to How can a language model keep track of the provenance of the main knowledge/sources used to generate a given output?
- How do I get leonardo.ai to add correct, legible text to images? posted here is similar to Why can't AI image generators output verbatim text when prompted to do so?
- How do I "teach" a large language model new knowledge? posted here is similar to How do you add knowledge to LLMs on CV.
- How important are GPUs vs. CPUs when training an LLM? could have been posted on DataScience.SE, e.g. see After the training phase, is it better to run neural networks on a GPU or CPU?
- etc.