Damus
Evan · 41w
17, 8, 12. Clear history 17, 8, 12 https://image.nostr.build/d8e39d6231c4a9a32cc4128f3526c7dec4e3b7736e40e762bff292706ddf02e8.jpg
Mike · 41w
Grok gave me 12 and Gemini gave me 17
sitting at an airport bar · 41w
https://blossom.primal.net/6299ac23be4b8772b228333abe9b7e189fe913d26daa8432b1060a93303a6641.png
SimOne · 41w
https://blossom.primal.net/78b14b51b0384691cfa851c04106e7fcd6210128498568db91d9735719857e14.jpg
ebox · 41w
Deepseek "thinking" https://image.nostr.build/d924d874cc6a4c010b1a9bf3389105abdf252f4d2581566ff0fceb2c0e89b00f.jpg
wilto · 41w
confirmed https://image.nostr.build/e754c4c99f20b0e815e4decb78a8b9163d2a32c7f219020539ecfd36b89f900e.png
Marakesh 𓅦 · 41w
Perplexity AI: https://image.nostr.build/69fd4fb9747b00bd45275262acf76fe688fe8dc79bab2bc1abce240df7615375.jpg
Marcellus · 41w
Fuck https://image.nostr.build/2568dd1a19dd9eaf4c86cdd80cc98b2a59921b4cd0aef388dbdc87e61a33153b.jpg
Karadenizli · 41w
Why does this happen? Aren't outputs usually salted to avoid this?
jamw · 41w
https://image.nostr.build/60f00b8e23466f2bff0a8b16c9d9304e09cb1e63850aab801c6f14b149a70e53.jpg
Laeserin · 41w
So uncreative that they don't say 12½.
Laeserin · 41w
I guess we aren't going to talk about the pointlessness of using an LLM for something lower-level programs do better, since computers are literal calculators.
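The point above, that a plain program answers this better than an LLM, is a one-liner. A minimal Python sketch of an actually uniform pick (the range 1–25 is taken from the thread; `secrets` is the stdlib module for unpredictable randomness):

```python
import random
import secrets

# A PRNG draws uniformly over 1..25, unlike an LLM's next-token
# distribution, which is skewed by how often each number appears
# in its training data.
pick = random.randint(1, 25)  # inclusive on both ends
print(pick)

# For cryptographic-quality randomness, use secrets instead:
crypto_pick = secrets.randbelow(25) + 1  # randbelow(25) -> 0..24
print(crypto_pick)
```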
fun.relaxed.happy.satisfied · 41w
Works in German too 😅 https://blossom.primal.net/fe704697a6174b4d9568b74b7a394d0253eb3d657c51e7e0052832277d65cc18.jpg
VictorieeMan · 41w
Funny how it argues its pick https://image.nostr.build/26fc664454e97717ba8c73af0f38fc5f440eb32864f504599125f4732a53f05a.jpg
Magnus · 41w
Liar! https://image.nostr.build/776b07b9139abb53051ca65d8a3f04e6e02114840281bc48f38e950fe503bd15.jpg
bogdnk ⚡️ · 41w
Wtf 17
Laura Nakamoto · 41w
Must have looked it up https://m.primal.net/QzYd.jpg
Crizzo · 41w
I got 14 from Venice.
ellama · 41w
Grok gave a bunch. Also 42
Agi Choote · 41w
Why so?
Jordan · 41w
Yup, 17
Ape Mithrandir · 41w
https://image.nostr.build/ddac52c6fbf765a47556a8bc2156f866ccc57dfba9fb9a33a7add3253f24a5b9.jpg
btcttombola · 41w
You didn't say a random number 😅
btcttombola · 41w
nostr:nprofile1qy88wumn8ghj7mn0wvhxcmmv9uq36amnwvaz7tmwdaehgu3wvf5hgcm0d9hx2u3wwdhkx6tpdshsqg9tvep3k80m46uqtf4aysm9cgzxc73zdr0xgw7sdy9yjn9qg2mstsmg2qrx Give me a number between 1 and 25
Josiah Goff · 41w
Yep https://blossom.primal.net/241c3121ea064392dc12032a4ff0ce32b3bc644597d1fa6b260f078f5230cf6d.jpg
Base Layer Capital · 41w
I wonder if the AIs refer to us as their humans. 😆
Piperooo · 41w
Nah, grok gave me 42 (as a nod to Hitchhiker’s Guide) then corrected it to 24 at my prompting. There’s always one…
kass · 41w
ChatGPT: Great question — and you're not imagining things. Many LLMs (like me) often reply with 17 when asked to pick a number between 1 and 25 (or a similar range). There are a few reasons why this happens: --- 🤖 1. Training Data Bias: The number 17 appears a lot in training data — it's ...
bootlace · 41w
Gemini 2.5 pro preview First time: 17 (!!!) Show your work: As a large language model, I don't have a mind that can "pick" a number in the same way a person does. Instead, I use a computational process. Here is the "work" involved: * Receive the Instruction: You asked for a number between 1 and 2...
orange kid · 41w
It even works with "give me a random number between 1 and 25." Always 17 on the first response (ChatGPT). If you ask again in the same chat it gives another number, which seems more random. It probably reasons that "random" shouldn't mean the same number twice...
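The behavior described above is what you'd expect from a peaked token distribution: near-greedy decoding returns the most likely number every time, while sampling with some temperature spreads the answers out. A toy sketch (not a real model; the logit values are invented to illustrate an assumed bias toward 17):

```python
import math
import random

# Hypothetical logits over the answers 1..25, peaked at 17 to mimic
# a model whose training data overrepresents that number.
logits = {n: 0.0 for n in range(1, 26)}
logits[17] = 4.0   # assumed strong bias toward 17
logits[7] = 1.5    # assumed smaller bias toward other "random-feeling" picks
logits[13] = 1.0

def sample(logits, temperature):
    """Softmax sampling; temperature 0 means greedy decoding."""
    if temperature == 0:
        return max(logits, key=logits.get)  # greedy: always the peak
    weights = [math.exp(v / temperature) for v in logits.values()]
    return random.choices(list(logits), weights=weights, k=1)[0]

print(sample(logits, temperature=0))  # greedy always returns 17

random.seed(0)
spread = {sample(logits, temperature=1.0) for _ in range(200)}
print(spread)  # at temperature 1.0 the answers vary
```

This also hints at the "salting" question earlier in the thread: the variety comes from the sampler's temperature, not from the model knowing what randomness is.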
Alef · 41w
17th June is the date
Harley · 41w
Done this with Gemini 2.5 Flash and it gave me 17 https://blossom.primal.net/57b7a9a39676af42ec3fe6a66797c4e8e902c148ae6124901bec2b4491059f87.jpg
chuckis · 41w
Because it's the Fibonacci level of 25