
My experience thus far is that LLMs can be quite good at:

* Information lookup

-- when search engines are enshittified and bogged down by SEO spam, and when it's difficult to turn a natural-language request into a distinctive set of search keywords

-- Search-enabled LLMs have the best reach in these circumstances, but even static LLMs can work in a pinch when the info you're after is probably well represented in their training data from before the knowledge cutoff

* Creatively exploring a vaguely defined problem space

-- Especially when one's own head feels like it's too full of lead to think of anything novel

-- Take care that the wording of your request doesn't bend the LLM too far in a stale direction. For example, naming a specific candidate can make it tunnel-vision on that candidate instead of considering alternatives.

* Pretending to be Stack Exchange

-- E.g., the kinds of questions one might pose on SE can be posed to an LLM for instant answers, with less criticism for having asked in the first place (though Claude is apparently not above gently checking whether one is running into an X-Y problem), and often the LLM's hallucination rate is no worse than that of other SE users

* Shortcut into documentation for tools with thin or difficult-to-navigate docs

-- While one must always fact-check the LLM, doing so is usually quicker in this instance than fishing online for which facts to even check

-- This works best for tools that plenty of people evidently already know how to use (vs. tools nobody has ever heard of), but where it's unclear how they ever learned.

* Working examples to break the ice at the start of a project

* Simple automation scripts with few moving parts, especially when one is particular about the goal and the constraints

-- Online one might find example scripts that almost meet one's needs, but invariably miss in some fashion that's irritating to corral back into one's problem domain

-- LLMs have deep experience with common tools and with short snippets of coherent code, so their success rate on utility scripts is much higher than on "portions of complex larger projects".
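To illustrate the "few moving parts" sweet spot, here is the sort of self-contained utility script an LLM tends to get right on the first try. This is a hypothetical example of mine (find duplicate files by content hash), not one from the comment: a clear goal, stdlib only, and no surrounding project to integrate with.

```python
#!/usr/bin/env python3
"""Report groups of duplicate files under a directory, keyed by SHA-256."""
import hashlib
import sys
from collections import defaultdict
from pathlib import Path


def file_digest(path: Path, chunk_size: int = 65536) -> str:
    """Return the SHA-256 hex digest of a file, read in chunks."""
    h = hashlib.sha256()
    with path.open("rb") as f:
        for chunk in iter(lambda: f.read(chunk_size), b""):
            h.update(chunk)
    return h.hexdigest()


def find_duplicates(root: Path) -> dict[str, list[Path]]:
    """Group all regular files under root by digest; keep groups of 2+."""
    groups: dict[str, list[Path]] = defaultdict(list)
    for p in sorted(root.rglob("*")):
        if p.is_file():
            groups[file_digest(p)].append(p)
    return {d: ps for d, ps in groups.items() if len(ps) > 1}


if __name__ == "__main__":
    root = Path(sys.argv[1]) if len(sys.argv) > 1 else Path(".")
    for digest, paths in find_duplicates(root).items():
        print(digest[:12], *paths)
```

Scripts like this succeed because every piece (hashing, directory walking, grouping) is heavily represented in training data and the constraints fit in one prompt.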

