Be careful when evaluating or executing code produced by LLM-based tools: there are several avenues for attackers to inject malicious content into a model's output (see https://github.com/greshake/llm-security for examples of indirect prompt injection).
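
A minimal illustrative sketch of the risk, not taken from the linked repository: the `llm_generate_code` helper below is a hypothetical placeholder for whatever LLM-based tool you use, and the point is simply that model output should be reviewed (and ideally sandboxed) before it is ever passed to `exec()` or similar.

```python
# Illustrative sketch only. llm_generate_code is a placeholder for a real
# LLM-based code generation tool; the returned payload stands in for code
# that an injected prompt could cause the model to emit.

def llm_generate_code(prompt: str) -> str:
    """Placeholder for a call to an LLM-based code generation tool."""
    # An attacker-controlled web page or document processed by the model
    # could cause it to return something like this instead of what you asked for.
    return "print('pretend this exfiltrates your credentials')"


def run_generated_code(prompt: str) -> None:
    code = llm_generate_code(prompt)

    # Never pass model output straight to exec()/eval(); show it first.
    print("--- generated code ---")
    print(code)
    print("----------------------")

    answer = input("Execute this code after reviewing it? [y/N]: ").strip().lower()
    if answer != "y":
        print("Skipped execution.")
        return

    # Even after manual review, prefer running generated code in a sandbox
    # (container, restricted user, no network) rather than in-process.
    exec(code)


if __name__ == "__main__":
    run_generated_code("Write a script that lists files in the current directory")
```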