





While all first episodes catch viewers off guard, Discovery is a big jump, from the 2000s to the almost-2020s.
And it definitely tries to do things differently. It’s all one connected story, episodes are rarely stand-alone, and the moral messages are mixed into the story.
That said, as you go on, you’ll find wonderful ideas that would have been awesome in the ’90s too. Surely a few things will seem off, but the concepts are very Star Trek. At least to me.


It’s actually a song my Mexican friend knew; I didn’t, even though I’m the Italian one xD


What an awesome evening!


Italian here, I was lucky with my English teachers and my parents, and then the internet encouraged me to learn even more


The needs of the one outweigh the needs of the many
And that one is me


Omg I rewatched that episode last week and I didn’t think of that, I assumed it was a voiceover too, even thinking that it was slightly different from the normal Picard voice to fake it being mimicked 🫠


The sooner our society abandons individualism and goes back to community oriented structures, the faster these problems will disappear.
Mass media, mass religion, mass communication: they have all involved manipulation, from telling lies, to telling only part of the truth, to salience and propaganda. AI is just the latest scary tool.
The issue is that we expect the individual to have the tools to defend themselves from this. That’s not how it works. We need human connection, small family communities, where members help each other with the various aspects of life, including following public discourse and politics. When everyone has a sense of belonging to a place of care, the value of money, and of the mass manipulation it buys, falls and fails to accomplish anything.
Let’s also stop believing that people vote mainly based on how easy it is to manipulate them. People vote based on their culture, their experiences, their overall mood about how things are going. The turnout in New York should show that. No fake news about Mamdani had an effect that came even close to people’s connection to a grassroots campaign, people’s dissatisfaction with Trump and billionaires, people’s cultural affiliation with a young immigrant who speaks of affordability, nurseries, and pride in being one’s true self against discrimination.


Time limits on apps where I tend to have arguments on politics
Mmmmm that sounds to me like some digital sound with a slight flanger. Like a guitar-ish sample.
LMMS has this Vibed instrument that starts from something pretty similar. You could play around with that, on a PC tho
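If you want to poke at the idea outside LMMS, here’s a minimal sketch of a “slight flanger” in Python: mix the signal with a copy whose delay time is slowly swept by an LFO. The parameter values are just guesses for a subtle effect, not anything taken from Vibed.

```python
import numpy as np

def slight_flanger(x, sr=44100, depth_ms=1.5, rate_hz=0.4, mix=0.3):
    """Toy flanger: blend the dry signal with a copy whose delay
    is swept by a slow sine LFO (nearest-sample, no interpolation)."""
    n = np.arange(len(x))
    # Delay sweeps between 0 and depth_ms milliseconds
    delay = (depth_ms / 1000.0 * sr) * (0.5 + 0.5 * np.sin(2 * np.pi * rate_hz * n / sr))
    idx = np.clip(np.round(n - delay).astype(int), 0, len(x) - 1)
    return (1.0 - mix) * x + mix * x[idx]
```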


He’d probably have a blast!
He’s always been easygoing and funny at conventions; there are a lot of videos where he impersonates Patrick Stewart (Capt. Picard)


Straight to hen*
Yes, I don’t think it’s a matter of training.
The diffusion model generates pictures by starting from a canvas of random pixels, then editing those pixel colours to carve the picture out of that chaos.
To achieve an area of one uniform colour, it would need to output very exact values in the last generation step.
It can be fixed easily with a very subtle lowpass filter, but that would be human intervention; the model itself will have a hard time replicating it.
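For what it’s worth, here’s a minimal sketch of that “very subtle lowpass” fix in Python (scipy assumed, sigma is a guess for “subtle”): a tiny Gaussian blur averages out the per-pixel jitter in flat areas while barely touching edges.

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def smooth_flat_noise(image, sigma=0.6):
    """Very subtle lowpass: a small per-channel Gaussian blur to
    suppress the residual noise diffusion models leave in areas
    that should be one flat colour."""
    # image: H x W x C float array
    return np.stack(
        [gaussian_filter(image[..., c], sigma=sigma)
         for c in range(image.shape[-1])],
        axis=-1,
    )
```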
Two that I noticed are:
For drawings in the Ghibli style, you can see noise in areas that should be one uniform colour. That’s because of how the diffusion model works: it’s very hard for it to replicate a lack of variation in colours. In fact that noise will always exist, it’s just more noticeable in simple styles.
For music, specifically with Suno, it tends to use similar-sounding instruments across different tracks of the same specified genre, and those sounds might change during the track and never come back to their original sound (because it generates the track section by section from start to end, the transformer model feeds the last sections back as input to generate the new ones, amplifying possible biases in the model)
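To make that feedback idea concrete, here’s a toy illustration (not Suno’s actual architecture): treat the “sound” of a section as one number, feed each section back as input for the next, and watch a tiny systematic bias compound over the track.

```python
import random

def next_section(prev_timbre, bias=0.02):
    # Hypothetical scalar "timbre": the model conditions on what it
    # already generated, plus its own small systematic bias.
    return prev_timbre + bias + random.gauss(0, 0.01)

timbre = 0.0  # sound of the opening section
for i in range(8):
    timbre = next_section(timbre)
    print(f"section {i}: timbre has drifted to {timbre:+.3f}")
```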


It is conceivable.
For example, imagine a society like ours but where everyone, no matter their wealth, has to do essential jobs, taking turns. Say everyone in your city has to be a garbage collector for one week every two years, or work in hospitals helping clean patients for a week every year, or work in the fields for a month every two years: basically all the jobs people only do because they have no less tiresome option, plus jobs that are now almost fully automated to produce essentials but still require some labour.
In that system, you’d always have enough workforce to give everyone enough food, housing, healthcare and education to live, while people might still work at secondary non-essential jobs, voluntarily, earning a bit more to have their fancy cars and yachts.
This is a conceptual society where, despite the possibility for individual differences, you don’t really have classes, because whether you were born in poverty or you are Elon Musk, you all have to take part in essential services equally.
I recently made a PDF with some of my notes on hints of AI in images and music, but I’m not sure how to send files here
It’s not easy ofc, and it will get harder with time, but I am convinced we can tell with a bit of training, because there are clear differences in the creative processes of humans and machines, which will always result in different biases
With time I think we’ll learn to only trust people we have some social connection with, so we know they are real and they don’t use AI (or they use it up to a level acceptable to us)


You could try Google’s new NotebookLM if the legal writing is a book, or even just a long document
Otherwise just use any LLM and ask it to go step by step, checking its references


Imo the main difference is that genAI models have been trained on a whole lot of art without consent, and the few privileged companies able to do this are making a ton of money (mainly from investors, not sure how much from paying users). Which is very extractive and centralised. Using others’ art to make memes is at least distributed and not that remunerative
Putting AI aside, if we see art used in a random shitposter’s meme, it feels different than a political party or a big corporation using that art for meme propaganda/advertisement.
Another interesting field for this is YouTube Poops. They use tons of copyrighted material, from big movies to local youtubers to advertisements. I would consider that fair, but if instead a big television network had a program showing youtubers’ content without permission, that’s another story
Another example: Undertale’s soundtrack being made with Earthbound’s sound effects and samples. If it weren’t an indie game, and especially if it were a big publisher using an indie’s sounds, it wouldn’t have been well received.
So back to AI: when it comes to a person using it for their own projects, the issue to me isn’t really using stolen art, but using a tool that was made through an extractive theft of art by a big corporation, rather than by seeking collaboration with artists, using existing Creative Commons material, etc.
We also have to keep the context in mind: copyright law mainly serves big publishers; it hardly ever protects smaller creators from those big publishers, in any field. The genAI training race is based on a complete lack of interest in applying, or at least discussing, the law.
I’m glad to see tho that thanks to this phenomenon more and more people are seeing how IP doesn’t make sense to begin with. Just keep in mind copyright and attribution are two different things.
Artichokes


I think generators have some kind of inherent style that we somehow learn to recognise
Like sure, they have learned thousands of styles for each type of image, and you have some control over the style through the prompt, but one issue with the transformer decoder model (the principle behind almost all genAI at this point) is that at each generation step it gets everything generated so far as input.
This feedback loop might induce repeated choices, even with different prompts, in the later stages of generation. This is not apparent in images because they are seen all at once, but it is pretty evident in Suno (at least v3): later parts of different songs might share sounds. At least in my experiments making it generate EDM; I’m now able to spot the synth it often ends up creating.
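Here’s a minimal sketch of what I mean by that feedback loop, in plain Python (`model` is a stand-in for the whole network, not any real API):

```python
def decode(model, prompt_tokens, n_steps):
    """Autoregressive decoding: everything generated so far is fed
    back as input, so early choices constrain every later step."""
    tokens = list(prompt_tokens)
    for _ in range(n_steps):
        next_token = model(tokens)  # conditioned on ALL prior output
        tokens.append(next_token)
    return tokens

# Dummy "model" that just echoes its last output: once a choice is
# in the context, it keeps reappearing no matter the prompt.
print(decode(lambda ts: ts[-1], ["intro", "synth_A"], 4))
```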
In terms of pictures and videos, that might be a reason generated stuff is consistently uncanny across image types.