At work, almost everyone is very optimistic about generative AI (LLMs, but not only). We all talk about its potential in terms of the best possible outcome. So on here I try to balance that by following people who offer critiques. I don’t mean doomy AGI outcomes, but people who point out that maybe the models we’re so hyped about don’t actually have the potential to improve to the point of true usefulness in all these use cases.

It’s been easier for me to grasp the ethical arguments about the training data, algorithmic discrimination, and the harms of people mistaking a chatbot for a human. The same goes for the economic arguments about replacing skilled workers (e.g. online support agents) with bots trained on their previous work.

But I’m out of my depth when it comes to the actual mechanisms of LLMs and diffusion models, so I have no grounded intuitions about their potential for growth.

So I follow smart critics.

At a previous job, I set up linting for our tech docs (in a docs-as-code publishing flow), and I loved the power that natural language processing (NLP) and customized styles gave me to standardize the docs and check for mistakes.

When I started playing with LLM chatbots, I immediately thought they could be a more convenient application of that. But so far, nope.

#nlp #llm #GenAI
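I won’t name the specific tool here; this is just a minimal sketch of the kind of hand-rolled docs-as-code style check I mean. The rules, terms, and paths below are made up for illustration, not taken from a real style guide.

# Minimal sketch of a docs-as-code style check. Rules and paths are
# hypothetical; a real setup would load them from configuration.
import re
import sys
from pathlib import Path

# Hypothetical house-style rules: preferred terms and banned phrasings.
SUBSTITUTIONS = {
    r"\butilize\b": "use",
    r"\be-mail\b": "email",
    r"\blog into\b": "log in to",
}
BANNED = [
    (r"\bsimply\b", "avoid 'simply'; it can read as condescending"),
    (r"\bplease\b", "instructions shouldn't say 'please'"),
]

def lint_file(path: Path) -> int:
    """Return the number of style violations found in one file."""
    problems = 0
    for lineno, line in enumerate(path.read_text(encoding="utf-8").splitlines(), 1):
        for pattern, preferred in SUBSTITUTIONS.items():
            match = re.search(pattern, line, re.IGNORECASE)
            if match:
                print(f"{path}:{lineno}: use '{preferred}' instead of '{match.group(0)}'")
                problems += 1
        for pattern, message in BANNED:
            if re.search(pattern, line, re.IGNORECASE):
                print(f"{path}:{lineno}: {message}")
                problems += 1
    return problems

if __name__ == "__main__":
    docs_root = Path(sys.argv[1]) if len(sys.argv) > 1 else Path("docs")
    total = sum(lint_file(p) for p in docs_root.rglob("*.md"))
    sys.exit(1 if total else 0)  # a nonzero exit fails the CI step

Whatever the rules are, a pass like this is deterministic: same docset in, same findings out, every time.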

LLMs as currently available (to me) don’t have the grunt to check vast docsets the way linters can. They don’t consistently apply rules. I had hoped they could “learn” a style and output a style guide, but no: their kind of “learning” relies too much on “feels like” and can’t extract the underlying principles. That is still a human task.
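To make the contrast concrete, here’s roughly what the chatbot version of that check ends up looking like. The ask_llm() stub is a placeholder for whatever model is at hand, not a real API, and the rules are the same made-up ones as above.

# Sketch of the LLM-as-linter experiment. ask_llm() is a placeholder for
# whatever chat interface is available; nothing here is a real API.
STYLE_RULES = """Check the documentation below against these rules and report
violations as 'line number: problem':
1. Use 'email', never 'e-mail'.
2. Use 'log in to', never 'log into'.
3. Don't use 'simply' or 'please' in instructions."""

def ask_llm(prompt: str) -> str:
    """Placeholder for a real chat-completion call."""
    raise NotImplementedError("wire up a model here")

def llm_lint(doc_text: str) -> str:
    # One request per chunk of docs; long docsets have to be split up and
    # the free-form findings merged by hand. Runs on the same text can
    # disagree, which is the inconsistency described above.
    return ask_llm(STYLE_RULES + "\n---\n" + doc_text)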

And here my intuition fails me. Is this a hard limitation of LLMs? Or will it be possible to extend them with principles?