
There's a pretty big focus on LLM chatbots for taboo fetishes... I'm sure it's because it's disturbing, but am I the only one who sees that particular facet as an utter nothing-burger?

Of all of the AI safety concerns, I think this is one of the least compelling. If an LLM veered into this kind of topic out of nowhere, it could be very disturbing for the user, but in this case it's exactly what they're searching for. I'm pretty sure that for any given disturbing topic you can find hundreds of fully written fictional stories on places like AO3 anyway. If you want to, you can also engage in these fantasies with other people in erotic role-play, and taboo fetishes are not exactly new. Even if it's illegal in some jurisdictions (no clue, not a lawyer), it's ultimately victimless and unenforceable, so I doubt that dissuades most people.

Sure, it's rather disturbing, but personally I find lots of things that are perfectly legal and not particularly taboo to be disturbing, and I still don't see a problem with people indulging in them, as long as I'm not forced to be involved.



> Of all of the AI safety concerns, I think this is one of the least compelling.

It's especially low-priority given that nobody has put forward evidence that software-assisted {fictional X} leads to more {actual X} that harms real people.

I trust a lot of us are old enough to have lived through the failed prophecies that FPS games needed to be categorically banned to prevent players from becoming homicidal shooters in real life.


Ah, the good old Manhunt controversy.

The problem with placing your trust there is that the lesson has to be relearned by every generation that hasn't dealt with such controversies at scale, and that when people are emotionally motivated, the rational part switches off temporarily, so they stop caring about what was previously claimed.

Until both of those are adequately accounted for, this is going to repeat endlessly, because people love controlling others.


This is just the natural conclusion to a narrative that conflates hallucination with [un]safety. Nightmares are not danger.

LLMs will never be able to filter out specific categories of content. That is because ambiguity is an LLM's core feature. The entire narrative of "LLM safety" implies otherwise. The narrative continues with "guardrails", which don't guard anything. The only thing a "guardrail" can do is be "loud" enough to "talk over" undesired continuations. So long as the content exists in the model, the right permutation of tokens will be able to find it.
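To make the "talking over" point concrete, here's a minimal sketch (the token names, logits, and penalty value are all made up) of a guardrail applied as a logit penalty. It can shrink the probability of an undesired continuation, but it never zeroes it out; a prefix that boosts those logits enough can still surface it:

    import math

    def softmax(logits):
        # Turn raw logits into a probability distribution over tokens.
        m = max(logits.values())
        exps = {tok: math.exp(v - m) for tok, v in logits.items()}
        total = sum(exps.values())
        return {tok: e / total for tok, e in exps.items()}

    # Hypothetical next-token logits for some prompt prefix.
    logits = {"benign": 2.0, "undesired": 1.5, "other": 0.5}

    # The "guardrail": a flat penalty on the undesired token. It makes that
    # continuation much less likely, but its probability never reaches zero.
    guarded = dict(logits)
    guarded["undesired"] -= 4.0

    print(softmax(logits)["undesired"])   # ~0.33 without the penalty
    print(softmax(guarded)["undesired"])  # ~0.009 with it: rare, not gone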

Unless you want a model trained on a corpus that completely excludes any sexuality, any violence, and any children, you will always have a model capable of generating a CSAM-like horror story. That's just how text and words work. And the reality is that a useful model will probably include some content touching each of these three subjects.


As AIs improve, they won't even need CSAM or fetish content in their training set. Explaining such content in a handful of words of plain English is not that difficult. Users would trade prompts freely, and as context windows grow, you'll be able to fit more information into them.

And as I like to remind people, LLMs are not "AI" in the sense of being the last word in AI. Better is coming. I don't know when; it could be next month, it could be 15 years, but we're going to get AIs that "know" things in some more direct and less "technically just a very high-probability guess" way.


What everyone needs to know about LLMs is that they do not perform objectivity.

An LLM does not work with categories: it stumbles blindly around a graph of tokens that usually happens to align with real semantic structures. It's like a coloring book: we perceive the lines, and the space between them, to be a true representation, but that is a feature of human perception; it does not exist on the page itself.
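To illustrate, here's a toy sketch (a weighted bigram graph with made-up tokens and weights, vastly simpler than a real transformer, but the same in spirit). It produces locally plausible sequences, yet nowhere in the structure is there a node for any category; the categories are something we read into the output.

    import random

    # A toy bigram "model": weighted edges between tokens, nothing more.
    # No node anywhere represents a category like "violence" or "children".
    graph = {
        "the": [("cat", 3), ("dog", 2)],
        "cat": [("sat", 4), ("ran", 1)],
        "dog": [("ran", 3), ("sat", 1)],
        "sat": [("quietly", 2)],
        "ran": [("quietly", 1)],
        "quietly": [("the", 1)],
    }

    def walk(start, steps, seed=0):
        # Stumble from token to token according to edge weights.
        random.seed(seed)
        token, out = start, [start]
        for _ in range(steps):
            choices, weights = zip(*graph[token])
            token = random.choices(choices, weights=weights)[0]
            out.append(token)
        return " ".join(out)

    print(walk("the", 6))  # e.g. "the cat sat quietly the dog ran"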


In most cases commenting on votes is boring and pointless, as per the guidelines. Rather unusually, though, I've found it genuinely interesting to watch the votes on this comment: they paint a picture of people being quite split on this matter. I figured it might wind up in the gray (I don't usually say "am I the only one" without good cause), but it leaves me genuinely curious exactly how people disagree. (To be clear, I'm probably not going to engage much more, since this isn't a topic I care deeply about; it's more of a morbid curiosity.)


I describe here what I have learned in later life about what was done to me in earlier life.

One may "groom" a child to accept sexual abuse in large part by portraying this as an entirely normal aspect of their present phase of life. To do so requires the presentation of what appears to be true evidence.

Such images are invariably lies, but remember that the victim is a child as naïve to lies as to all else, yet. What he sees he will also believe, and not notice all the lies behind it.

AI-generated CSAM makes this a much, much easier process. It removes the prerequisite of acquiring genuine child pornography. Now all that's required is unsupervised access to an AI and to a child, and not even both at once. That expands the threat radius by several orders of magnitude.

This alone suffices to justify AI-generated CSAM as a crime. In the US you may own many types of rifle. You may not, though, own an artillery rifle. It is far too dangerous a weapon, and you no more than any other civilian can have any possible lawful use of such a thing. Therefore its simple possession is a crime. The same principle applies here.


There is one particular US citizen who owns a private fleet of ICBMs. Those missiles are typically used for things like launching low-cost satellite internet and ferrying astronauts to and from the ISS, but the only difference between those missiles and ones intended to make an arbitrary chunk of the Earth explode is a matter of programming the flight computers slightly differently.

Provided you pay the taxes, you can own pretty much whatever you want (even in countries less free than the US); all you have to do is be fabulously wealthy. (Or, in the NFA's case, pay a $200 transfer tax.)


The NFA seems to do a good enough job limiting what it regulates. We discuss law, not code or science; as in any human endeavor, even theoretical perfectibility is impossible to achieve and dangerous to pursue.

If you mean to suggest the same law should regulate both you and me, and men with all the power and armament of a James Bond movie villain, I refer you to my prior statement, and to the final argument of a king who wishes to remain so.


So, you seem to believe that generative AI in this case is mostly just a force multiplier, but that the force multiplication is so great it should be considered dangerous in a way analogous to a firearm.

I do appreciate hearing your perspective. I'll admit that I am not personally convinced by this reasoning but I think it is at least a sensible line of argument.


More precisely, that it should be considered specifically analogous to a "destructive device" as defined by the National Firearms Act.

I do not require to have convinced you, and genuinely appreciate your consideration.


"You may not, though, own an artillery rifle"

if you are not a criminal and pass the paperwork, you actually can.

however, where you operate your howie is another matter.


If you mean to say we should seek to set the same bar on AI that generates CSAM, then that seems to me a very fine place to start, grandfather clause and all.


I think the main issue with regulating computer programs and the Internet the way we regulate physical objects is that replication online is roughly free. If we really wanted regulation like this to be meaningful, it would have to involve regulating the sale of compute power, something I personally hope never happens.

That said, we're probably about to see a very similar issue crop up in the real world with 3D-printed firearms, and I'm not looking forward to the consequences, pretty much regardless of the outcome.

Interesting times.


I might have been clearer that my analogy was specific to the theory of the crime, and not intended to speak to methods of abating it. You are of course correct that those would need to differ.

I don't like the idea of such regulation being made in ignorance, either. Engineers should have a seat at that table, which requires first that we have earned it. I don't see where we have begun to do that, and I did my first paid work in this profession twenty-nine years ago.

If that failure on the part of our profession proves to have consequences for us or for society, then I don't think any one of us is free to consider the blame for those entirely undeserved.

Again, I don't require to have convinced you.


Since I started working, we've gone from unwavering optimism about the power of software and the Internet to free and enrich us, as a nearly purely positive force, to waking up with quite a hangover once that proved untrue. At least that's how I feel.

> Engineers should have a seat at that table, which requires first that we have earned it.

I don't love this mentality. Leaving aside the question of how to judge whether a seat at the table has been earned, software developers are not a monoculture; even this thread shows there is actually quite a lot of disagreement. Not having software developers at the table will probably just ensure the regulation is unnecessarily stupid and pointless, much like what seems to happen with firearms regulation.

That said, I'm not even really concerned so much about whether engineers are allowed at the table. Instead I suspect the regulation will be skewed by interests with a lot of money, e.g. OpenAI wanting to pull up the ladder behind them.

> Again, I don't require to have convinced you.

Sorry if my previous comment came off as condescending. Anyway, I'm only commenting here because this is an interesting discussion topic to me, not because I'm trying to force a consensus.


Oh, no worries at all. I only wanted to disclaim any force with which I'd seemed to make my argument.

Please excuse me if I seem a little hard to pin down today. I spoke earlier of what was done to me before. Of those responsible, I learned yesterday that by far the worst has ceased forever to trouble this earth: the police officer on whom he first fired has brought home to him all his sins. His corpse now enriches the soil of a potter's field, more worth by far than he ever had in his life, which he did not so lead as to earn even the most vacuous performance of mourning.

I have for decades expected such news to change me when it came. I did not at all expect this wealth of peace and joy. I may not yet have begun to encompass it.

These are thoughtful points you've made. I may find a more substantive response to offer here, but possibly not before the reply window closes.


The article we're talking about wants to be about CSAM stories. That alone is a topic most people have a strong opinion about; a strong enough opinion to say that anything even adjacent to the topic isn't worth even a little consideration. CSAM is the ultimate taboo subject, and for good reason.

But this article isn't really about CSAM. It's about the taboo itself. This article taunts the reader: if CSAM truly deserves to be taboo, then it logically follows that anything resembling CSAM should be censored, and its creators punished.

If we take this argument seriously, then we must actually consider what it means to resemble CSAM. That's a path that no one is interested in exploring, so the argument itself just vanishes.

--

The real argument is about the threat of story. Every writer has the power to write any story that they can imagine. There is nothing new about this: it's been true since prehistory, since language itself.


Just a reminder that AI safety is all of the following, and many other things:

- The rogue-AI scenario, which increasingly looks like a figment of the collective imagination of certain extremely smart people who discovered religions in their tech tree.

- Instructions on how to make nuclear weapons (are they scraping classified materials now?).

- Geopolitical games (don't let the adversary have what we have; "for the benefit of all humanity" is a red herring).

- Spam/manipulation/botting/astroturfing (legit one, not nearly enough attention paid compared to others).

- Erotic roleplay (prudish/thought policing), disturbing erotic roleplay (arguably a nothingburger, division is understandable).

Turns out that if you shove all of that into one huge category of "AI safety", the term becomes overloaded and meaningless.


> Instructions on how to make nuclear weapons (are they scraping classified materials now?..)

Presumably, a "smart enough" AI could work the physics out the same way the humans who wrote those classified materials did. It's still not a realistic threat unless we're banning physics textbooks as well; AFAIK the barrier is more the materials and equipment required than the principles.


If an LLM can figure out nukes from first principles, I think we have bigger problems.


The idea that an LLM will spontaneously use its super-intelligence to somehow develop perfect plans for building weapons of mass destruction seems greatly misplaced. Think of all the things that go into building something like that which have nothing to do with intelligence. It is of course feasible for someone to pull it off, which we know because we already did, but all that took was ordinary human intelligence, the right knowledge, oh, and also (presumably) access to kilograms of weapons-grade plutonium, among many other things.

Nobody ever really explains why normal nuclear non-proliferation efforts are insufficient to address the concerns.

I get that the fear isn't always rational, but it is rather mind-bending that these kinds of arguments are actually used in the real world in favor of some crazy regulation. I don't even care that much about LLMs, and I find it pretty perplexing.


Really hard to quantify the demand out there, since, thankfully, most of these people keep it out of my feed.

But I have a feeling it's significantly more popular than we expect.



