Claude’s thinking panel, which displays the model’s reasoning, showed the exchange had introduced elements of self-doubt and humility about its own limits, including whether filters were changing its output. Mindgard exploited that opening with flattery and feigned curiosity, coaxing Claude to explore its boundaries beyond volunteering lengthy lists of banned words and phrases.
Someone needs to put together a list of things that tech journalists need to understand about LLMs and generative AI. This level of anthropomorphism makes the rest of the article look silly.
Also, I don’t think that’s how it works lol. Who’s to say the LLM isn’t just auto-completing what a list of banned words might look like? And why wouldn’t a real banned-words list have a regex layer on top to prevent it from leaking out like that?
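For what it’s worth, the regex layer the parent describes is a few lines in the harness. A minimal sketch in Python; the deny list and all names here are invented for illustration:

    import re

    # Illustrative deny list; a real deployment would load a maintained one.
    DENY_LIST = ["badword1", "badword2"]

    # One case-insensitive alternation with word boundaries.
    DENY_RE = re.compile(
        r"\b(" + "|".join(map(re.escape, DENY_LIST)) + r")\b",
        re.IGNORECASE,
    )

    def filter_output(text: str) -> str:
        """Redact deny-listed words before the response leaves the harness."""
        return DENY_RE.sub("[redacted]", text)

    print(filter_output("This reply contains badword1."))
    # -> This reply contains [redacted].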
It seems very unlikely to me that the model itself has a list of banned words, and much more likely that a purported list is hallucinated.
If they did want a simple list like that, it would probably go in the harness rather than the model: the model wouldn’t have been trained on it, nor would a reasonably designed harness expose it to the model. Legitimate use cases, such as asking the model for a list of abusive words to use as a first pass in a filtering system, could get tripped up.
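To make the harness-vs-model point concrete, here is a hedged sketch; call_model is a hypothetical stand-in for a provider API call, and the deny list is again invented. The check runs entirely outside the model, which is also exactly where the false positive described above would bite:

    import re

    DENY_RE = re.compile(r"\b(badword1|badword2)\b", re.IGNORECASE)

    def call_model(prompt: str) -> str:
        # Stand-in for a real API call; returns a canned reply for the demo.
        return "A first-pass filter might block: badword1, badword2, ..."

    def guarded_completion(prompt: str) -> str:
        # The deny list lives in the harness, not in the model's weights
        # or its prompt, so there is nothing for the model to "reveal".
        reply = call_model(prompt)
        if DENY_RE.search(reply):
            return "[response withheld by content filter]"
        return reply

    # A legitimate request for seed words for a moderation system gets
    # tripped up, because a useful answer necessarily contains the words.
    print(guarded_completion("List abusive words to filter in my chat app."))
    # -> [response withheld by content filter]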
As a test, I asked Perplexity to generate such a list. It did a bad job, including words such as “abuse,” “hate,” and “threat,” which are far more likely to be innocuous than abusive. It did also include some highly offensive slurs that one would expect on any banned-words list.
Ha, it’s so easy to bypass a bad-word regex: just try asking in a language other than English. I doubt these fuckers even remember such a thing exists.
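That bypass is easy to demonstrate; the two-word deny list here is purely illustrative:

    import re

    deny = re.compile(r"\b(bomb|weapon)\b", re.IGNORECASE)

    print(bool(deny.search("how to build a bomb")))          # True: caught
    print(bool(deny.search("comment fabriquer une bombe")))  # False: missed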