"old school #hacker", Googler, owner of derg.nz and dragonhive.net. #IT person and #tech nerd of all kinds professionally; #carmodding and #hardware #tinkerer, #gardener, #developer, #cook, #writer, #biking, and a lot more things in spare time, when motivated. Feel free to ask me anything! Very likely to follow back if you are #furry or like tech stuff! exception of bots, crypto junk, etc NSFW account: https://mastodon.derg.nz/@AnthropyAD (CW: EXPLICIT/NSFW) Lore/SFW RP account: https://mastodon.derg.nz/@anthropylore
mastodon.derg.nz
"old school #hacker", Googler, owner of derg.nz and dragonhive.net. #IT person and #tech nerd of all kinds professionally; #carmodding and #hardware #tinkerer, #gardener, #developer, #cook, #writer, #biking, and a lot more things in spare time, when motivated. Feel free to ask me anything! Very likely to follow back if you are #furry or like tech stuff! exception of bots, crypto junk, etc NSFW account: https://mastodon.derg.nz/@AnthropyAD (CW: EXPLICIT/NSFW) Lore/SFW RP account: https://mastodon.derg.nz/@anthropylore
mastodon.derg.nz
@anthropy@mastodon.derg.nz
·
1d ago
LLM code findings
LLMs are models, and they want to take on a certain shape. The more you try to push them out of it, the more issues you'll run into later.
The reason you get issues is that while you can tell them how to do something, *they will not remember*. Even if you put it in a file like claude.md or memory.md, they actively have to read that file to 'remember' it.
As things fall out of the context, they'll fall back on the model's default behavior instead of what you told them.
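That eviction effect can be sketched as a toy sliding window (illustrative only: real LLMs truncate by tokens, not whole messages, and the exact policy varies by tool):

```python
# Toy illustration: an instruction only influences the model while it
# still fits in the context window; older messages are silently dropped.

CONTEXT_LIMIT = 4  # max messages the "model" can see (real limits are in tokens)

def visible_context(history, limit=CONTEXT_LIMIT):
    """Keep only the most recent messages, like a sliding context window."""
    return history[-limit:]

history = ["SYSTEM: always use tabs, never spaces"]  # the claude.md-style rule
history += [f"conversation turn {i}" for i in range(1, 6)]

ctx = visible_context(history)
# The instruction has fallen out of the window, so the "model" can no
# longer see it and falls back on its defaults:
print(any("tabs" in msg for msg in ctx))  # False
```

This is why re-reading the instructions file each session matters: it puts the rule back at the recent end of the window instead of leaving it to age out.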
#AI