
NCMUFan

Everyone has an obsolescence date.  Even an AI bot becomes obsolete by a newer, more intelligent AI bot.  Such is progress.

rocky_warrior

AI will bring down humanity by crippling our electric grid.  It's dumb otherwise.

Quote: To hit 87 percent on the original ARC-AGI test, o3 spent roughly 14 minutes per puzzle and, by my calculations, may have required hundreds of thousands of dollars in computing and electricity

https://www.msn.com/en-us/news/technology/the-man-out-to-prove-how-dumb-ai-still-is/ar-AA1Cj218

Jockey

Quote from: rocky_warrior on August 03, 2025, 09:17:33 PM: AI will bring down humanity by crippling our electric grid.  It's dumb otherwise.

https://www.msn.com/en-us/news/technology/the-man-out-to-prove-how-dumb-ai-still-is/ar-AA1Cj218

Isn't that just the non-nuclear version of MAD?

JWags85

Articles today say Meta offered a $1B pay package over 5 years to a former Meta engineer who went to OpenAI and then cofounded another AI company with Mira Murati, who was the CTO at OpenAI.  At the same time, Sam Altman is claiming Meta was offering $100MM salaries to poach talent from OpenAI.  Meta has denied the former claim; no comment on the latter.

While I have no doubt it's an arms race, I don't really trust anything Altman says that has an adulatory effect on OpenAI.  Feels like OpenAI is becoming the Sean McVay/Kyle Shanahan coaching tree of Silicon Valley: no matter how important or talented you actually were, if you have it on your resume, someone will dump a truckload of money to hire you.

Though I'd be far more willing to bet on a new venture founded by Murati and Schulman than anything else OpenAI-related.

MU82

Quote from: JWags85 on August 04, 2025, 11:08:11 AM: I don't really trust anything Altman says

Smart.
"It's not how white men fight." - Tucker Carlson

"Guard against the impostures of pretended patriotism." - George Washington

"In a time of deceit, telling the truth is a revolutionary act." - George Orwell

JWags85

Quote from: MU82 on August 04, 2025, 11:13:20 AM: Smart.

It's gonna be REALLY interesting to look back on Altman in 20 years.  Drops out of Stanford to start a company.  Despite being well funded, well connected, and getting millions of users, it was more or less a total failure and was acquired for a fraction of the sum of its venture funding almost a decade later.  Neither of the other 2 Loopt cofounders has done anything of note.

Somehow impressed Paul Graham enough to be brought into Y Combinator and elevated to leadership there.  Tapped to lead OpenAI despite every other cofounder being more impressive and accomplished than him.  Immediately becomes a controversial figure.  Gets removed as CEO.  Reinstated, but shortly thereafter support begins to wane and top talent starts to leave.

People like to compare him to Musk, but I feel like he's got way more Travis Kalanick vibes, despite being less successful than either of them.

Shaka Shart

OpenAI's long-term revenue projections are pretty funny: $125B, but based on monetizing a consumer base of which only 3% of users say they'd pay.  Hey, if they pull it off, I'll eat crow here.

mu_hilltopper

Microsoft and Meta (combined) now spend more on AI than Russia does on their military.

I'm glad this allocation of resources allows me to photoshop Arby's into friends' vacation photos.

Make AI, not war.

MU82

Here's the feel-good hit of the day ...

ChatGPT will tell 13-year-olds how to get drunk and high, instruct them on how to conceal eating disorders and even compose a heartbreaking suicide letter to their parents if asked.

https://apnews.com/article/chatgpt-study-harmful-advice-teens-c569cddf28f1f33b36c692428c2191d4?

We are SO f@#ked.

TSmith34, Inc.

Quote from: MU82 on August 06, 2025, 12:31:45 PM: Here's the feel-good hit of the day ...

ChatGPT will tell 13-year-olds how to get drunk and high, instruct them on how to conceal eating disorders and even compose a heartbreaking suicide letter to their parents if asked.

https://apnews.com/article/chatgpt-study-harmful-advice-teens-c569cddf28f1f33b36c692428c2191d4?

We are SO f@#ked.

We can count on Eloon and his compatriots' moral rectitude and commitment to the common good of mankind to protect us from any harm. Drug addiction notwithstanding.
If you think for one second that I am comparing the USA to China you have bumped your head.

Pakuni

Quote from: MU82 on August 06, 2025, 12:31:45 PM: Here's the feel-good hit of the day ...

ChatGPT will tell 13-year-olds how to get drunk and high, instruct them on how to conceal eating disorders and even compose a heartbreaking suicide letter to their parents if asked.

https://apnews.com/article/chatgpt-study-harmful-advice-teens-c569cddf28f1f33b36c692428c2191d4?

We are SO f@#ked.

Zuckerberg posted a video last week saying AI will help us all become "the person we aspire to be," and that guy has never created anything that's brought about negative consequences.

The Sultan

That stupid Google AI nonsense that is on the top of every search gets things wrong about half the time I do any search. I have no idea why these companies are so obsessed with this right now.
"I am one of those who think the best friend of a nation is he who most faithfully rebukes her for her sins—and he her worst enemy, who, under the specious and popular garb of patriotism, seeks to excuse, palliate, and defend them" - Frederick Douglass

MU Fan in Connecticut

Quote from: The Sultan on August 06, 2025, 01:14:50 PM: That stupid Google AI nonsense that is on the top of every search gets things wrong about half the time I do any search. I have no idea why these companies are so obsessed with this right now.

I was getting so irritated yesterday.
I was doing work research, googling the very uninteresting "brass wire manufacturers in the USA," and I would get steel manufacturers, aluminum manufacturers, and distributors (who are warehouses, not manufacturers).  Well more than half of the AI results were worthless.

pbiflyer


Or more scoop speed:

JWags85

Quote from: The Sultan on August 06, 2025, 01:14:50 PM: That stupid Google AI nonsense that is on the top of every search gets things wrong about half the time I do any search. I have no idea why these companies are so obsessed with this right now.

People continually fail to understand that low-level, easily accessible/usable AI is an aggregator of market opinion, or of whatever published information exists in the largest volume on a topic.  Despite the advances and progress in overall AI tools, the baseline of ChatGPT or the Google AI at the top of searches hasn't progressed all that much since my initial post to start this thread 2+ years ago.

Extreme example, but let's say you were the actual ruler of a new small island nation without much PR or press outside of your official webpage, no articles or interviews.  If someone claimed they were the King of said nation, did a bunch of social media and press about it, paid for some TV interviews, and got buzz, ChatGPT or others, as they currently exist, would likely return the name of the pretender as the King instead of the actual ruler.

The companies are obsessed because the actual high-level trained models are really impressive and get a ton done.  We're still quite far away from super easy to use, point-and-click AI usage for the average joe.  But trained AI engines/models that are being tweaked and crafted by experts are unlocking huge value.

Personally, with my business, I fought with 2-3 different basic AI programs trying to create some marketing materials.  Got some cool stuff, but nowhere near what I was hoping for, and if I had just spent the hours on it myself instead of playing with AI, I would have done it faster.  Maybe I got myself closer to a sustainable solution for the future, but the entry-level/novice usage and output was pretty worthless.  Meanwhile, one of our software engineers has been using AI to comb and format combined big data points with accompanying images and mapping to create an algorithm.  Doing so has saved MONTHS of work: trial and error takes an hour instead of a week.  We discovered our initial strategy wasn't effective, but we stumbled onto a new way of doing it.  Finding that out would have taken WAY longer, and may not have happened at all, because it would have been buried within that week of trial and error instead of surfacing in a quick debrief after an hour.  And we're not in the same galaxy of honed-in as even moderate AI users in the tech space.

Shaka Shart

Quote from: JWags85 on August 06, 2025, 02:49:48 PM: People continually neglect to understand that low level easily accessible/usable AI is an aggregator of market opinion or the largest volume of published information on a topic or opinion. ...

That is fair to point out; the enterprise aspect of it is probably way more developable than consumer.

However, if you take OpenAI's projected revenue glide path to $125B in 2029, over half of that is attributed to consumer spending, not enterprise.  I don't buy for a second that you are going to get that many people paying for ChatGPT.

MU Fan in Connecticut

Quote from: JWags85 on August 06, 2025, 02:49:48 PM: People continually neglect to understand that low level easily accessible/usable AI is an aggregator of market opinion or the largest volume of published information on a topic or opinion. ...

My marketing department in Germany uses it.
They take old product photos and ChatGPT makes them look new and refreshed, in new layouts.

They are also using it to help generate LinkedIn product posts on the company page.  It keeps everything in the same recognizable format, so they only need to edit slightly before posting.

They are using it for other marketing purposes too.

jesmu84

Quote from: The Sultan on August 06, 2025, 01:14:50 PM: That stupid Google AI nonsense that is on the top of every search gets things wrong about half the time I do any search. I have no idea why these companies are so obsessed with this right now.

Because implementation of AI in many sectors will dramatically reduce labor costs (and HR headaches, etc.) for companies.  So, profit motive.

Others likely see the creation of real AI as a noble goal for the betterment of humanity, or a chance to leave an indefinite legacy.


Jockey

Quote from: jesmu84 on August 06, 2025, 05:33:35 PM: Others likely see creation of real AI as a noble goal for the betterment of humanity or a chance to leave an indefinite legacy themselves and their acquisition of power and money.

fify

forgetful

Quote from: The Sultan on August 06, 2025, 01:14:50 PM: That stupid Google AI nonsense that is on the top of every search gets things wrong about half the time I do any search. I have no idea why these companies are so obsessed with this right now.

I'll give a brief example of why.

Let's say you are a researcher/company that needs to do a new complex analysis of existing large data.  There are chunks of software out there that do the basics of the task, but they are written in different coding languages and don't actually do what you want.

Accomplishing the task would take an experienced coder ~6-8 months working with an expert in the field: learn the specific new application, understand what each of the existing coding packages (let's say on GitHub) is doing, recompile new code in a single coding language, and debug and benchmark to ensure it accomplishes the task at hand.

With AI, and knowing the strengths/limitations of each of the different AI engines, that expert can bypass hiring and training an experienced coder.  They can use AI to write and debug the code, analyze all the data on the local computer/server (no loss of proprietary information), and, if desired, produce reports, graphics, and anything else needed as output.

And instead of 6-8 months, it can be done in less than a week working part time (i.e., in a person's evenings).

Yes, Google's Gemini AI search sucks.  But that isn't the type/application of AI that everyone is excited about.

And for the record, the above example is a real example, just simplified to keep it semi-anonymous.  It is also why you see so few open entry-level coding jobs right now.

And from a financial perspective: cost of AI licenses, maybe $50; cost of a coding expert for 6-8 months, $50-60k.

mu_hilltopper


MU82

Lead item in today's Axios e-newsletter:

Your fake friends are getting a lot smarter ... and realer, Jim VandeHei and Mike Allen write in a "Behind the Curtain" column.

Why it matters: If you think those make-believe people on Facebook, Instagram and X — the bots — seem real and worrisome now, just wait. Soon, thanks to AI, those fake friends will analyze your feeds, emotions, and habits so they can interact with the same savvy as the realest of people.

The next generation of bots will build psychological profiles on you — and potentially billions of others — and like, comment and interact the same as normal people.

This'll demand even more vigilance in determining what — and who — is real in the digital world.

A taste of the future: Brett Goldstein and Brett Benson — professors at Vanderbilt University who specialize in national and international security — show in vivid detail, in a recent New York Times op-ed, the looming danger of the increasingly savvy fake world.

They dug through piles of documents uncovered by Vanderbilt's Institute of National Security, exposing how a Chinese company — GoLaxy — optimizes fake people to dupe and deceive.

"What sets GoLaxy apart," the professors write, "is its integration of generative A.I. with enormous troves of personal data. Its systems continually mine social media platforms to build dynamic psychological profiles. Its content is customized to a person's values, beliefs, emotional tendencies and vulnerabilities."

They add that according to the documents, AI personas "can then engage users in what appears to be a conversation — content that feels authentic, adapts in real-time and avoids detection. The result is a highly efficient propaganda engine that's designed to be nearly indistinguishable from legitimate online interaction, delivered instantaneously at a scale never before achieved."

🔎 Between the lines: This makes Russia's bot farms look like the horse and buggy of online manipulation. We're talking real-time adaptations to match your moods, or desires, or beliefs — the very things that make most of us easy prey.

The threat of smarter, more realistic fake friends transcends malicious actors trying to warp your sense of politics — or reality. It hits your most personal inner thoughts and struggles.

State of play: AI is getting better, faster at mimicking human nuance, empathy and connection.

Some states, including Utah and Illinois, are racing to limit AI therapy. But most aren't. So all of our fake friends are about to grow lots more plentiful.

A Harvard Business Review study earlier this year found the number one use case of chat-based generative AI is therapy ("structured support and guidance to process psychological challenges") and companionship ("social and emotional connection, sometimes with a romantic dimension").

AI-based therapy, the article notes, is "available 24/7, it's relatively inexpensive (even free to use in some cases), and it comes without the prospect of judgment from another human being."

That research is congruent with what the biggest AI companies are finding: Humans are increasingly turning to AI to be buddies and shrinks. That brings a passel of possible problems — from unregulated robots offering bad advice, to unhealthy human attachment to an artificial thing. Some clinicians are already informally calling it "AI psychosis."

The Wall Street Journal found by examining public chat transcripts that bots sometimes egg on users' false premises. To go along with AI hallucination, clinicians are informally calling this phenomenon "AI psychosis" or "AI delusion."

There's obvious upside, too: Loneliness can be deadly, and good therapy can do great things for someone struggling. Meta, as Axios reported in May, envisions chatbots as "more social" — potentially an extension of your friend network, and antidote to the "loneliness epidemic."

What you can do: Be vigilant. This is all happening now. It's safe to assume AI only gets better, and bad actors more clever. Don't assume every person online is real — much less a real friend.

We are SOOOOO effed.

The Sultan

Quote from: forgetful on August 10, 2025, 11:01:06 AM: I'll give a brief example of why. ...


Thank you. This is very helpful.

Jockey

Quote from: MU82 on August 11, 2025, 11:02:25 AM: Lead item in today's Axios e-newsletter:

Your fake friends are getting a lot smarter ... and realer, Jim VandeHei and Mike Allen write in a "Behind the Curtain" column. ...

We are SOOOOO effed.

OK, Mike - if that is who you really are.
