pips
- 43 Posts
- 7 Comments
pips@lemmy.filmto
World News@lemmy.ml•Smoking age should rise from 18, by one year every year - Rishi SunakEnglish
21 · 2 years ago
There is a legal, regulated, mostly safe method to buy cigarettes. It is inaccessible if you are under a certain age, but only the seller/provider is punished for violating regulations. It’s okay to have restrictions on what children can consume.
While current laws on illegal drugs do not work, arguing against any regulation whatsoever is similarly silly; tobacco laws obviously do work. Smoking rates have dramatically declined since those laws and public education campaigns began.
pips@lemmy.filmto
Privacy@lemmy.ml•ASstechnica : shittiest cookie consent popup champion everEnglish
39 · 2 years ago
We can’t all be not American.
pips@lemmy.filmto
Politics@beehaw.org•VA started the school year with new anti-trans policies. The state's largest district won't comply. - LGBTQ NationEnglish
12 · 2 years ago
Look at what Texas did to HISD: took it over claiming “poor outcomes” despite it being one of the best districts in the state. I’m not sure how it works in Virginia, but they may try something similar.
pips@lemmy.filmto
Technology@beehaw.org•Sarah Silverman and other authors are suing OpenAI and Meta for copyright infringement, alleging that they're training their LLMs on books via Library Genesis and Z-LibraryEnglish
1 · 2 years ago
“You’re making a hasty generalization here”
I’m really not, though I’ll readily admit I’m simplifying things. An LLM can only recombine what it’s been given. I suppose it can generate a string of characters and assign a definition to it, but that’s not really intentional creation. There are many similarities between how a human generates something and how an LLM does, but arguing they’re the same radically oversimplifies how humans work. While we can program an LLM, we literally do not have the capability to replicate a human brain.
For example, can you tell me what emotions the LLM had when it produced the output it did? Did its physical condition have any effect? What about its past, not just what it has learned but how it was treated? What is its motivation? A human response to anything involving creativity factors in many things that we aren’t even consciously aware of, and these are things an LLM doesn’t have.
The study you’re citing is from Google, so there’s likely some bias and selective reporting. That said, we were talking about creativity, not regurgitating facts or analyzing data. I think it’s universally accepted that as the tech gets better, it’s preferable to have a computer make the first attempt at a diagnosis, especially for a scan or large data analysis, and then have a human confirm it.
For the remix example, don’t forget that samples get attribution. Artists credit what they sampled and get called out when they don’t. I’m unclear whether an LLM can even cite how it derived its output, because the developers haven’t revealed whether there’s any sort of derivation log.
pips@lemmy.filmto
Technology@beehaw.org•Sarah Silverman and other authors are suing OpenAI and Meta for copyright infringement, alleging that they're training their LLMs on books via Library Genesis and Z-LibraryEnglish
1 · 2 years ago
An LLM can’t make something original, it can only make something derivative. But that derivative work isn’t the same as when a human makes a derivative work, because a human isn’t writing each word or phrase based on the likely “correct” next word or phrase through an algorithmic process. What humans do is orders of magnitude more complex, though it can at times also be accidental or intentional plagiarism.
In short, an LLM’s output is necessarily a string of preexisting human inputs. A human’s output, while it can be informed by and reference other human inputs, doesn’t have to replicate preexisting human inputs and can be an original analysis. The AI that is publicly available is not sophisticated enough to be more than fancy predictive text.
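The “fancy predictive text” claim can be illustrated with a toy sketch (not how a real LLM works — actual models use learned probability distributions over tokens, not lookup tables, and the corpus and function names here are made up for illustration). A bigram predictor can only ever emit words that already appeared in its training text, which is the narrower point being made:

```python
import random
from collections import defaultdict

def train_bigrams(corpus):
    """Map each word to the list of words observed to follow it."""
    model = defaultdict(list)
    words = corpus.split()
    for prev, nxt in zip(words, words[1:]):
        model[prev].append(nxt)
    return model

def generate(model, start, length=5, seed=0):
    """Repeatedly pick an observed next word; stop if none exists."""
    random.seed(seed)
    out = [start]
    for _ in range(length):
        followers = model.get(out[-1])
        if not followers:
            break
        out.append(random.choice(followers))
    return " ".join(out)

corpus = "the cat sat on the mat the cat ate the fish"
model = train_bigrams(corpus)
print(generate(model, "the"))
```

Every word this emits comes verbatim from the training corpus; real LLMs are vastly more sophisticated, but the argument above is that the output is still assembled from patterns in human-made inputs.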
pips@lemmy.filmto
Technology@beehaw.org•Sarah Silverman and other authors are suing OpenAI and Meta for copyright infringement, alleging that they're training their LLMs on books via Library Genesis and Z-LibraryEnglish
1 · 2 years ago
But when the answers aren’t original thoughts but regurgitations of other people’s thoughts about the book, then it’s plagiarism. LLMs can’t provide original output, only variations on what people have made available (whether legally or not). The answer might not even be correct or make any sense. It’s just predictive text to a crazy degree.
When you copy someone’s work without attribution, that’s plagiarism. When your output is only possible because of someone else’s work over which they own copyright and the output replicated the copyrighted material, that’s copyright infringement.
Israel has labeled BDS, the actual peaceful option, terrorism.