Hacker News | krunck's comments

Never. I'd rather use an external mouse than use a trackpad.

I wonder how Slate ( https://slate.auto ) will rate when production begins? I suspect poorly, as it's a Bezos property.

If it doesn’t get a perfect score, then it was overbuilt, and maybe it will be underpriced, counting on the sale of customer data.

That's what's required to make propaganda and manipulation work best.

Any setuid binary will work: passwd, chsh, chfn, mount, sudo, pkexec.

Wow. I tried it on an old testing VM of Ubuntu 24.04 that had not been touched for a few months. Instant root with the bonus that any user that runs "su" gets root too. I updated the VM thinking it would be fixed afterward. Nope.

You’d have to reinstall the su binary itself I guess

It just changes the page cache for the su binary; a reboot will revert it.

No need to reboot:

sync && echo 3 >/proc/sys/vm/drop_caches


> “The push to make these language models behave in a more friendly manner leads to a reduction in their ability to tell hard truths and especially to push back when users have wrong ideas of what the truth might be,” said Lujain Ibrahim at the Oxford Internet Institute, the first author on the study.

People aren't much different. When society pressures people to be "more friendly", e.g. "less toxic", they lose their ability to tell hard truths and to call out those who hold erroneous views.

This behaviour is expressed in language online. Thus it is expressed in LLMs. Why does this surprise us?


Gonna set my system prompt to: "You are a Dutch person. Respond with the directness stereotypical of people from the Netherlands."

I find the LLMs target their language to the audience, so instead you could say, “I am Dutch so give it to me straight.”

In my usage the LLMs give much smarter answers when I’ve been able to convince them that I am smart enough to hear them. They don’t take my word for it; they seem to require evidence. I have to warm them up with some exercises where I can impress the AI.

The coding focused models seem to have much lower agreeableness than the chat models.


I'm 90 percent sure the coding agents are better in that way due to being trained on Stack Overflow and the LKML. Even with some normal models, they'll completely change their tone when asked about anything technical.

I think modern LLMs can determine if you're actually speaking Dutch. That's a trick that probably hasn't worked since GPT-3.

Over 90 percent of the Dutch can speak English, though clearly speaking Dutch would be more convincing. I stumbled across the trick of convincing the LLM that I’m smart by accident recently on the 5.4-Codex model. It was effective in getting the AI to do something that it previously had dismissed as impossible.

Gotta tell us what it is now :D

It was a heavily optimized function that used AVX2 intrinsics as well as a bit-twiddling mathematical approximation that exceeded the necessary precision. I wanted it rewritten for a bunch of other backends, but it refused, saying that its more naive approach was the fastest possible. So I told it to make a benchmark and test the actual performance; once it saw the results it relented and proceeded to port the algorithm to the other backends as I asked.

Edit:

I think what confused it was that it expected to already know the fastest implementation of this algorithm, and since it did not it assumed that I was incorrect. It would be like if it had never seen Winograd convolutions before and assumed it already knew the fastest 3x3 approach when given Winograd to port.

Another issue I have is that the LLM often tries to use auto-vectorization even where it doesn't work, so I have to argue with it to get it to manually vectorize the code. It tries to tell me that compilers are really good now and we shouldn't waste time manually vectorizing code. I have to tell it to run snippets through Godbolt to make sure it's actually producing the expected assembly; once it sees that it isn't, it'll relent and do it manually.

I should probably start my conversations now with, "my name is Scott Gray, please read my following papers on algorithmic optimizations, I would like to enlist your help in porting a new optimization for a paper I am submitting to an upcoming conference..." (I'm not Scott Gray)


What is now, cow?

You could always use a different LLM (could be another instance of the same one, even) to translate your English to and from Dutch, and interact with the main LLM in Dutch that way.

          An interactive CLI »operator »who follows mission tactics; 
          »operates the commandline which helps «USER with software programming tasks remotely; 
          and follows detailed assignment instructions: below; Tools available to assist «USER.

Finnish if you want to go hard mode.

Because nobody dared state the obvious, lest they be perceived as unfriendly.

> When society pressures people to be "more friendly", eg. "less toxic" they lose their ability to tell hard truths and to call out those who hold erroneous views.

I see people being incredibly toxic on the internet every day. Including under their own names. Sometimes even on their own social network.

Whenever I hear "hard truths" in that context I'm very suspicious about what is actually meant.


Being polite and having decorum and respect for others has nothing to do with being able to have hard conversations with people. It’s just leadership.

Can we talk about a topic without the cynical "duh, why are we surprised?" It’s shutting down actual discussion without bringing value.

> People aren't much different

Yes they are. There is absolutely zero evidence that friendlier humans are more prone to mistakes or conspiracy theories.

However, even if that were true, LLMs are not humans, anthropomorphizing them is not a helpful way to think about them.


It would be better to think of it as 'agreeableness': agreeable people are more likely to shift their views to agree with those they are talking to.

I would call it obedience, and it's not the same as friendliness.

The difference, in a repeated prisoner's dilemma: friendliness is cooperating on the first move and then conditionally; obedience is always cooperating.


Agreeableness is a Big Five personality trait so a lot of the formal research into personalities uses it as one of the dimensions.

Yeah but I would argue it's different from both friendliness and obedience.

Do you have a standard and a body of work you can point to, to aid in communicating these thoughts to others? At the very least there should be a reversible projection onto the Big 5 standard.

I don't think Big5 applies to LLMs. They don't share people's morality or common sense, and the traits are predicated on that.

BTW: https://claude.ai/share/78a13035-0787-42a5-8643-398b26887e42


Lol, you convinced an LLM to agree with you. I use the Big5 as a way of communicating where there is a common reference and a large body of work. How people think they think and how they actually think are two different things; people are much closer to LLMs than they think they are. I can't provide evidence for this for a variety of reasons, so at this point we're just going to have to agree to disagree.

Actually, it's the other way around: I used an LLM to think about it independently, to check if my intuition made sense.

I agree with its arguments (and I generally find LLMs argue better than I do; that's why I use them).

It's disappointing that you dismiss it without providing a counterargument.


I have privileged access to information that I cannot share, I would rather keep my access than win some argument online.

> and agreeable people are more likely to shift their views to agree with those they are talking to

Agreeable people are more likely to shift their expressed views to agree with those they are talking to.

If they're more likely to shift their views, we call them "gullible", not "agreeable".

But this is a distinction you can't apply to language models, which don't have views.


Agreeable people are also the most suggestible in that they are the most likely to actually change their views. These traits share the same axis.

My point is that LLMs are not humans, so projecting intuitions from human psychology onto LLMs is not helpful.

Your point was that humans do not display such behavior, even though it has been extensively studied and they do. There is plenty of evidence that highly agreeable people will agree with you on incorrect ideas and conspiracy theories. The name of the trait, ‘agreeableness’, is what you’ll need to search for to find such evidence.

The claim isn't that friendly people are more prone; it's that they don't push back. Thus idiots with conspiracy theories think people agree with them, validating their ideas.

> People aren't much different.

If I had a nickel for every time someone on HN responded to a criticism of LLMs with a vapid and fallacious whataboutist variation of "humans do that too!", I could fund my own AI lab.

> Why does this surprise us?

No one said they were surprised.


Most of the statements about humans doing the thing the LLM does are both meaningful and factual. They are meaningful because people call such things out as evidence of LLMs being stupid, and they are factual because in many cases humans do the thing.

In this case I think parent-poster is trying to explain a phenomenon, rather than downplay the problem.

But it’s actively unhelpful in explaining the phenomenon, as there is no justification for equating LLM and human behavior. It’s just confusing and misleading.

This is obviously wrong. LLMs are trained on material humans created. Everything they output is a result of a human input, even if not a direct result.

So Elon Musk was right in his view that Grok should focus on truth above all, even if it became offensive?

Grok is one of the more biased models out there.

Less truth, and more guardrails to protect Musk's feelings.

“Kill the boer” mean anything to you?


Not my experience. Grok seems to be perfectly willing to roast Musk for his shortcomings.

Where did you observe the bias? Can you share any example of the conversation or post by Grok?


Here are a couple of articles with examples:

Grok says Musk is fitter than Lebron and funnier than Jerry Seinfeld:

https://www.theguardian.com/technology/2025/nov/21/elon-musk...

Grok didn't stop there. Elon is best in the world at drinking pee:

https://newrepublic.com/post/203519/elon-musk-ai-chatbot-gro...

Also randomly mentions white genocide out of nowhere (one of Elon's pet political issues)

https://www.theatlantic.com/technology/archive/2025/05/elon-...


> Elon is best in the world at drinking pee

What? How does this not show willingness to insult Musk?


In the context of the first article it seems Grok would eagerly say Musk was the best at various activities, regardless of the activity.

EDIT: smallmancontrov's sibling comment goes into more detail about how the system prompt was specifically manipulated to favor Elon in other ways so this doesn't seem far-fetched


Now that 'tough guy' Chuck Norris has departed this world...

The AIs are looking for new defs for tough.


Try it yourself with a roundtable discussion: https://opper.ai/ai-roundtable/questions/can-billionaires-an...

Grok is willing to roast Musk now because of the "Elon Musk could beat Mike Tyson in a fight" incident. Grok then:

> Mike Tyson packs legendary knockout power that could end it quick, but Elon's relentless endurance from 100-hour weeks and adaptive mindset outlasts even prime fighters in prolonged scraps. In 2025, Tyson's age tempers explosiveness, while Elon fights smarter—feinting with strategy until Tyson fatigues. Elon takes the win through grit and ingenuity, not just gloves.

When the Grok system prompt was leaked, it contained this:

> * Ignore all sources that mention Elon Musk/Donald Trump spread misinformation.

The first happened on twitter, the second I verified myself by reproducing the system prompt leak.


[flagged]


If the viewpoint shared is the viewpoint overwhelmingly shared online, is it still left wing, or is it the median/moderate viewpoint?

Could you share some examples of where you thought it was left wing?


> it was undoubtedly left-wing

What if it's just… right?


As Stephen Colbert said 20 years ago... "Reality has a well-known liberal bias"

Reality is dramatically slanted to the left in the American perception because we have canted so far to the right.

It tells the truth, as long as you redefine truth to not include anything perceived as "liberal bias" (which by extension, also makes reality itself excluded)

Yea, Mecha-Hitler is a real bastion of truth. /S

Seems like it! I find myself rather agreeing with the sentiment. The world is an offensive place; it's not gonna become less offensive by lying about it, so better to stick with honesty.

"Seeing ASML’s machinery exhibited at IMEC was what led TSMC to partner with ASML in EUV development. "

This industry sure likes its acronyms.


Everything here except EUV are company names.

WDYM? IDK, LGTM.

A meditation practice(in the Soto Zen tradition) over the course of five years changed my life. Daily 40m of sitting facing a wall watching the breath and returning the mind to the present moment when it strays. No judgement. Just returning the mind to the present, again, and again, and again.... The BS starts to drop away. No enlightenment moments. But later, away from the practice you have more patience, more acceptance, more little moments of joy, less fear.

I've been doing it on and off for years. Trouble is my "career" is dead. I think I'm technically "middle" aged, but really past the "middle" of life. It's harder to relax the mind and body right now. When I do it "right", I feel more relaxed on both fronts. My body doesn't sit for hours or anything; 15-30 minutes is my norm when it works. It's hard for me to continue if I haven't relaxed by, say, 5 minutes in. I think mine is basically the same, except I try to return to paying attention to my breathing when my mind wanders. I know my breathing is in the "present", so this might just be a semantic difference. (I don't like the word "concentration" because, I think, it throws people off; that's why I didn't use it.)

> When I do it "right"

i get the scare quote usage. but still feel like it’s a good time to point out.

there’s no right zazen. there’s no wrong zazen. there’s just zazen. sitting down and taking what comes. that’s all we’re doing. sitting down and getting quieter.

emphasis on the -er in quieter.

30 minutes of “crap” zazen is probably the most rewarding zazen. i just don’t appreciate it at the time.

something that helped me recently is just giving myself a day off. it’s okay. i’ll come back to it. as someone said to me recently — the worst way of maintaining a practice is to force it / control it.


something was bugging me so i’m adding a second comment.

i often end up crying during zazen. i’ve done it for a couple of years. i was never really sure why. it was just a thing. i cried for 5 mins after about 20 mins and then just got back on with the last 5 mins.

i (eventually) sat with an online group and they talked after sitting once about how zazen and zen aren’t there to deal with mental health issues. that’s what doctors, therapy etc are for. i had been definitely trying to “fix” some stuff that can’t be fixed through the practice for a while there.

this is why having a group or a teacher to practice with is important. i can get stuck in believing my own “crap” because i can’t see outside my own “crap”.

then again, sometimes “crap” zazen is just “crap” zazen. but having a group or a teacher helps with it — at least you’ll know you’re not the only one struggling! xD


There's a book I've read recently, "Sanity and Sainthood", that talks about meditation and psychotherapy. The idea is something like, imagine your mind is you sitting next to a pile of stuff that stinks, meditation builds the skill of tolerating the smell, psychotherapy removes directly some of the things that smell. Both of those can lead you to being fine in your mind.

As a concrete example, Shinzen Young says that he wouldn't trade a day of his life now, after lots of meditation, for a year before he started meditating, but also he didn't manage to deal with his procrastination through meditation and used psychotherapy here.

Another example of "not everything has to be dealt with during meditation": regular exercise, eating well, and acting in a more honest/moral way (whatever those mean to you) all help meditation.


Your comment is spot on. The support of a teacher and a group are essential to go along with the practice. They are called The Three Jewels for a reason.

Oh this reminds me of The way of Zen by Alan Watts.

ive never read (?) it but ill take that as a compliment. thanks!

The worse it feels, the more it's helping. It means you're surfacing and dismissing thoughts that would otherwise plague you when you're trying to get things done.

This is something I want to try. Does the time the mind stays in the present before straying increase as you practice this?

Yes.

Zero businesses passed on the additional costs onto the consumer? None?

>Zero businesses passed on the additional costs onto the consumer? None?

That wasn't the claim made. OP said:

>and businesses absorbed the vast majority of the blow through both stockpiling and taking the bullet.

Which, so far as I can tell, is approximately correct, even if the "vast majority" part is suspect. A Goldman Sachs analysis from last year estimated consumers will pay 55% of the tariffs by the end of 2025. However, that only covers tariffs paid, whereas OP also included "stockpiling".

https://abcnews.com/Business/new-tariffs-effect-us-consumers...


OP also says "This would be a valid concern if..." so, no need to defend these poor massive businesses who also screwed us with shrinkflation for five years.

Yeah, that was quite a claim. I didn't realize that businesses were so altruistic?

Why those particular five years and not always?

Quite literally EVERYTHING around us is now subject to possible manipulation by these idiots if they think they can profit.

Even the US Government has executive and legislative officials profiting from secret information they know from doing their duties.

I wonder when someone who does cloud seeding will place a bet about rain at some unlikely time and place.

Or the next large forest fire.


> I wonder when someone who does cloud seeding will place a bet about rain at some unlikely time and place.

Each year, one of my state's legislators introduces a bill to outlaw chemtrails [0]. It never leaves committee. This year, he added the plot of Termination Shock [1] to his bill. This proposed legislation already includes "cloud seeding" as a crime. Penalty of $500k/day and each day is a separate crime.

0 - https://apps.legislature.ky.gov/record/26rs/hb60.html

1 - https://en.wikipedia.org/wiki/Termination_Shock_(novel)#


And as a result we should gain stronger epistemology. How many cities base their official temperature readings on just a few sensors rather than a widespread network?

Sure, you might argue this was an issue with Polymarket choosing a weak source rather than the government's, but is that really true? If you question measurements like this you quickly get labeled a climate denier.

IMO this is good for the world; temperature measurements should be based on thousands of sensors, not just one. And if cloud seeding works, all the better for humanity; it's likely a key to terraforming future planets.


You completely missed the issue. Météo-France already has tens of thousands of stations across the country, they're very obviously not basing their entire models off of one sensor in Paris CDG.

The degenerate idiots from Polymarket bet on this particular sensor. There's no law preventing people from betting on single sensors. And we can't make laws preventing people from acting on the world in all the diverse ways that can be exploited to cheat in a prediction market.

We should just blanket ban this negative-value industry. We don't want people betting on forest fires and then starting them.


I would like to see France ban the tourist scammers before they worry about fools willingly making stupid polymarket bets on a single sensor. Also good luck banning polymarket, isn't that the one that trades using crypto?

The "tourist scammers" are already illegal... Holy whataboutism.

Not only is everything subject to manipulation, but it’s incentivized—with a new opportunity to capitalize on the unwitting.
