cross-posted from: https://nom.mom/post/121481
OpenAI could be fined up to $150,000 for each piece of infringing content. https://arstechnica.com/tech-policy/2023/08/report-potential-nyt-lawsuit-could-force-openai-to-wipe-chatgpt-and-start-over/#comments
It doesn’t. The original text is nowhere in the model. Words are nowhere in the model. It stores how often certain tokens (numbers standing in for language fragments; not whole words, but a few letters or punctuation marks, often chunks of words) appear together in sentences written by humans, and uses that to generate human-sounding sentences. The sentences it returns are thereby a massaged average of what it predicts a human would say in that situation.
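To make "fragments, not words" concrete, here's a toy sketch of subword tokenization. The vocabulary and IDs are made up for illustration; real tokenizers (e.g. BPE) learn their fragments from data, but the splitting idea is similar:

```python
# Hypothetical toy vocabulary: fragments mapped to integer token IDs.
VOCAB = {"un": 1, "believ": 2, "able": 3, "token": 4, "s": 5, " ": 6}

def tokenize(text):
    """Greedily split text into the longest known fragments."""
    tokens = []
    while text:
        # take the longest vocabulary fragment that prefixes the remaining text
        match = max((f for f in VOCAB if text.startswith(f)), key=len)
        tokens.append(VOCAB[match])
        text = text[len(match):]
    return tokens

print(tokenize("unbelievable tokens"))  # fragments, not words: [1, 2, 3, 6, 4, 5]
```

Note that "unbelievable" comes out as three IDs and "tokens" as two; the model never sees whole words at all.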
If you say “It was the best of times,” and it returns “it was the worst of times.”, it’s not because “it was the best of times, it was the worst of times.” is literally in its dataset; it’s because, after converting what you said to tokens, its training shows that the latter almost always follows the former. From the AI’s perspective, it’s as if you said the token string (03)(153)(3181)(359)(939)(3)(10)(108), and it found that the most common response to that by far is (03)(153)(3181)(359)(61013)(12)(10)(108).
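The "most common response" lookup can be sketched like this. The token IDs are the made-up ones from the comment above, and the exact-match frequency table is a deliberate simplification: a real model predicts one token at a time from learned probabilities, not from a literal table of stored strings:

```python
from collections import Counter

# Hypothetical (prompt, continuation) pairs standing in for patterns
# distilled from human-written text. IDs are illustrative, not real.
CORPUS = [
    ((3, 153, 3181, 359, 939), (3, 153, 3181, 61013, 939)),  # best -> worst
    ((3, 153, 3181, 359, 939), (3, 153, 3181, 61013, 939)),  # best -> worst
    ((3, 153, 3181, 359, 939), (7, 8, 9)),                   # some rarer reply
]

def predict(prompt):
    """Return the continuation most often seen after this prompt."""
    counts = Counter(cont for p, cont in CORPUS if p == prompt)
    return max(counts, key=counts.get)

print(predict((3, 153, 3181, 359, 939)))  # -> (3, 153, 3181, 61013, 939)
```

The point of the sketch: nothing here stores the combined sentence; it stores which token sequences tend to follow which.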
Sorry, wrong reply
Impression and memorization: it memorized the impression (“sensation”) of having the text “It was the best of times,” in its buffer, and “instinctively” outputs its impression, “it was the worst of times.”, knowing that each letter it added was the most “correct”, i.e. the most rewarding.