This sounds like it could turn away tens of thousands of potential Nostr users who'd have liked to use it primarily for vibe coding. That is, if they decided to take this advice.
Question for all Ethereum enthusiasts around here:
Say you have a smart contract to which users can submit transactions for which they have to pay gas fees. You'd like to offer them the service of paying these gas fees on their behalf so they don't have to go through the trouble of getting their own ETH. But it's also important that msg.sender always be the address corresponding to the private key with which the user signed the underlying transaction (rather than, say, the address of the payor). Is there an (easy?) way to do this?
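One answer people often give here is the ERC-2771 "meta-transaction" pattern: the user signs the intended call, a relayer (the "trusted forwarder") pays the gas and submits it with the signer's address appended to the calldata, and the recipient contract reads those last 20 bytes as the effective sender. One caveat: the *literal* msg.sender is then the forwarder, and the contract must use a helper like OpenZeppelin's `_msgSender()` instead; if you need the literal msg.sender to be the signer, that points toward EIP-4337 account abstraction with a paymaster. Below is a toy Python simulation of the calldata-suffix trick (not real contract code; the signature check a real forwarder performs is omitted):

```python
# Toy simulation of the ERC-2771 trusted-forwarder pattern.
# Real implementations verify the user's signature before forwarding;
# that step is omitted here to show only the sender-recovery mechanics.

class Recipient:
    """Simulates a contract inheriting ERC2771Context-style logic."""
    def __init__(self, trusted_forwarder: str):
        self.trusted_forwarder = trusted_forwarder

    def _msg_sender(self, msg_sender: str, calldata: bytes) -> str:
        # If the call came from the trusted forwarder, the effective
        # sender is the 20-byte address suffix of the calldata;
        # otherwise it is just the literal caller.
        if msg_sender == self.trusted_forwarder:
            return "0x" + calldata[-20:].hex()
        return msg_sender

    def handle(self, msg_sender: str, calldata: bytes) -> str:
        return self._msg_sender(msg_sender, calldata)


class Forwarder:
    """Simulates the relayer: pays gas, appends the signer, forwards."""
    def __init__(self, address: str):
        self.address = address

    def forward(self, recipient: Recipient, user: str, calldata: bytes) -> str:
        # Append the original signer's raw 20 bytes to the calldata.
        appended = calldata + bytes.fromhex(user[2:])
        # On-chain, the recipient's literal msg.sender is the forwarder...
        return recipient.handle(self.address, appended)


user = "0x" + "ab" * 20
fwd = Forwarder("0x" + "01" * 20)
rcpt = Recipient(trusted_forwarder=fwd.address)
effective = fwd.forward(rcpt, user, b"\x12\x34")
assert effective == user  # ...but _msgSender() recovers the signer
```

The design choice this models: the recipient contract only trusts the suffix when the call provably came from the forwarder it was configured with, so an arbitrary caller can't spoof another user's address.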
You don't need key signing parties to bind pubkeys to their owners. What you need are pubkeys that have an association with their owners which remains stable over the long term.
That's because, given enough time, you'll have plenty of opportunity to observe all kinds of cues and signals that tell you who is behind a pubkey, at least to the extent that the owner wants to reveal that information to you.
For example, over time you'll be able to tell whether the owner is a bot or a human, what their interests are, their writing style, what communities they are part of, etc. And if they want you to know their name or other "personally identifiable information," there will almost always be easy mechanisms by which they can convince you that they are who they say they are.
It seems very unlikely that AI's ability to impersonate will change this, either now or in the future.
They could send me an email at [email protected] or just reply on this thread if they prefer. It's about turning the prototype at www.inkan.cc into something that runs well (if they want to take a look at the current version, I can put their pubkey on the allowlist for the identity features). If they think they may be interested, I'm happy to discuss with them.
Back when I dabbled in the philosophy of mind, I was pretty open to the idea of machines developing consciousness. The philosophical arguments for that seemed fairly strong.
But now that AI has actually arrived, I just can't bring myself to believe that I'm dealing with anything conscious here. They just don't seem to have the right internal structure.
"empty inside" is a bit strong, though, there is a lot going on inside an LLM.