#generativeAI

192 posts · 109 participants · 7 posts today

"Corpora.AI is a tool that helps professionals research by “creating precise, distilled insights from the breadth and depth of global content.” As the company’s CEO and an expert in the AI field, Morris is well-aware of the financial toll such tools take on companies… and who really profits from them.

“They launched this type of service and no matter who you are, it’s fun to play with,” Morris began. “You see people creating images just for the fun of it. They share it with their friends. Everyone gathers around at their desk and they all think it’s a funny way to look at this person with a nice avatar that’s been created, or a real-looking person but it’s based on them, and so it’s fun.

“There’s a hype factor that gathers momentum very quickly. I had a phone conversation just last night about this and someone said, ‘Yeah, these guys are burning through GPUs doing this. Literally, they’re burning through GPUs, they’re almost catching fire.’ They’re having to run that hard to do it, and that’s taking away the capacity,” he explained.

“The same sort of models that we’re running for GPT and those sorts of things are running on the same hardware and probably in the same cloud-based server farms. So, all of a sudden now we’re putting another demand on the GPUs.”"

dexerto.com/entertainment/ai-c

A photo of an AI-generated image in the style of Studio Ghibli alongside a photo of Miyazaki looking stressed.
Dexerto · AI CEO claims ChatGPT is “burning” through a fortune because of Ghibli trend
In late March 2025, social media became inundated with a viral new trend thanks to ChatGPT’s image generator going public for all users, which took the internet by storm almost immediately. Folks flocked to the AI with requests to recreate photos of themselves as Studio Ghibli characters in a fad that was impossible to avoid...

Big tech companies want total control, but opt-out should be the way to go:

"OpenAI and Google have rejected the government’s preferred approach to solve the dispute about artificial intelligence and copyright.

In February almost every UK daily newspaper gave over its front page and website to a campaign to stop tech giants from exploiting the creative industries.

The government’s plan, which has prompted protests from leading figures in the arts, is to amend copyright law to allow developers to train their AI models on publicly available content for commercial use without consent from rights holders, unless they opt out.

However, OpenAI has called for a broader copyright exemption for AI, rejecting the opt-out model."

thetimes.com/uk/technology-uk/

The Times · AI giants reject government’s approach to solving copyright row · By Georgia Lambert
#AI #GenerativeAI #UK

As the sun dipped below the horizon, casting a warm orange glow over the city, the people of Echo 6 gathered at the edge of their world to watch it set. It was a moment of peace in an otherwise chaotic existence, and one that they cherished deeply.

#Flux #FluxDev #AIArt

MM: "One strange thing about AI is that we built it—we trained it—but we don’t understand how it works. It’s so complex. Even the engineers at OpenAI who made ChatGPT don’t fully understand why it behaves the way it does.

It’s not unlike how we don’t fully understand ourselves. I can’t open up someone’s brain and figure out how they think—it’s just too complex.

When we study human intelligence, we use both psychology—controlled experiments that analyze behavior—and neuroscience, where we stick probes in the brain and try to understand what neurons or groups of neurons are doing.

I think the analogy applies to AI too: some people evaluate AI by looking at behavior, while others “stick probes” into neural networks to try to understand what’s going on internally. These are complementary approaches.

But there are problems with both. With the behavioral approach, we see that these systems pass things like the bar exam or the medical licensing exam—but what does that really tell us?

Unfortunately, passing those exams doesn’t mean the systems can do the other things we’d expect from a human who passed them. So just looking at behavior on tests or benchmarks isn’t always informative. That’s something people in the field have referred to as a crisis of evaluation."

blog.citp.princeton.edu/2025/0

CITP Blog · A Guide to Cutting Through AI Hype: Arvind Narayanan and Melanie Mitchell Discuss Artificial and Human Intelligence
Last Thursday’s Princeton Public Lecture on AI hype began with brief talks based on our respective books. The meat of the event was a discussion between the two of us and with the audience. A lightly edited transcript follows. Photo credit: Floriaan Tasche. AN: You gave the example of ChatGPT being unable to comply with […]
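
For readers curious what “sticking probes” into a neural network can look like in practice, here is a minimal sketch using the Hugging Face transformers and scikit-learn libraries: extract hidden-layer activations from a small open model and fit a linear classifier on them, in contrast to purely behavioral evaluation on benchmarks. The model choice (gpt2), the layer index, and the toy sentiment labels are illustrative assumptions, not anything described in the interview.

import torch
from transformers import AutoModel, AutoTokenizer
from sklearn.linear_model import LogisticRegression

# Load a small open model and keep its per-layer hidden states.
tokenizer = AutoTokenizer.from_pretrained("gpt2")
model = AutoModel.from_pretrained("gpt2", output_hidden_states=True)
model.eval()

# Toy examples and labels for the probe to predict (illustrative only).
sentences = ["The movie was wonderful.", "The movie was terrible.",
             "I loved this book.", "I hated this book."]
labels = [1, 0, 1, 0]

features = []
with torch.no_grad():
    for text in sentences:
        inputs = tokenizer(text, return_tensors="pt")
        outputs = model(**inputs)
        # hidden_states: one tensor per layer, shape (batch, seq_len, hidden_dim);
        # take the last token's activation from a middle layer as the probe site.
        layer = outputs.hidden_states[6]
        features.append(layer[0, -1, :].numpy())

# Fit a linear probe: if it predicts the property well, that information is
# linearly readable from this layer's activations.
probe = LogisticRegression(max_iter=1000).fit(features, labels)
print("probe accuracy on training examples:", probe.score(features, labels))

The behavioral approach would instead only look at the model’s outputs (e.g., benchmark scores); the probe looks at what is represented internally, which is why the two are described above as complementary.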