Ask HN: Is anyone else sick of AI splattered code

75 points by throwaway-ai-qs 6 hours ago

Between code reviews and AI-generated rubbish, I've had it. Whether it's people relying on AI to write pull request descriptions (which are crap, by the way) or using it to generate tests... I'm sick of it.

Over the past year, I've been doing a tonne of consulting. In the last three months I've watched at least 8 companies embrace AI tooling for coding, testing, and code reviews. Honestly, the best suggestions I've seen are the ones found by linters in CI and spell checkers. Is this what we've come to?

My question for my fellow HNers: is this what the future holds? Is this everywhere? I think I'm finally ready to get off the ride.

barrell 5 hours ago

I'm not convinced it's what the future holds for three main reasons:

1. I was a pretty early adopter of LLMs for coding. It got to the point where most of my code was written by an LLM. Eventually this tapered off week by week to the level it is now... which is literally 0. It's more effort to explain a problem to an LLM than it is to just think it through. I can't imagine I'm that special, just a year ahead of the curve.

2. The maintenance burden of code that has no real author is felt months or years after the code is written. Organizations then react a few months or years after that.

3. The quality is not getting better (see GPT-5) and the cost is not going down (see Claude Code, Cursor, etc.). Eventually the bills will come due, and at the very least that will reduce the amount of code generated by LLMs.

I very easily could be wrong, but I think there is hope and if anyone tells me "it's the future" I just hear "it's the present". No one knows what the future holds.

I'm looking for another technical co-founder (in addition to me) to come work on fun, hard problems in a hand-written Elixir codebase (the frontend is ClojureScript, because <3 functional programming), if anyone is looking for a non-LLM-coded product! https://phrasing.app

  • james2doyle 5 hours ago

    Totally agree. I use it for chores (generate an initial README, document the changes from this diff, summarize this release, scaffold out a new $LANG/$FRAMEWORK project) that are well understood. I have also been using it to work in languages that I can and have written in the past but am out of practice with (Python), but I'm still babysitting it.

    I recently used it to write a Sublime Text plugin for me, and I forked a Chrome extension and added a bunch of features to it. Both are open source and pretty trivial projects.

    However, I rarely use it to write code for me in client projects. I need to know and understand everything going out that we are getting paid for.

    • bdangubic 4 hours ago

      > I need to know and understand everything going out that we are getting paid for.

      What is preventing you from this even if you are not the one typing it up? You can actually understand more when you remove the burden of typing: keep asking questions, iterate on the code, do a code review, a security review, a performance review… If done “right”, you can end up not only understanding better but learning a bunch of stuff you didn't know along the way.

      • barrell 3 hours ago

        I have never met an engineer whose abilities were limited by their typing speed. Most engineers can already type far faster than they can critically think/store memories.

        • bdangubic an hour ago

          My abilities are limited by time; I don't work (never have) more than 6-7 hours per day. Hence I automate everything that requires my time (and can be automated). If I have something that saves me 10 minutes per day, that is significant for me (it should be for you too, if you value your time). Now imagine if I have something that saves me 90 minutes per day…

        • JustExAWS 9 minutes ago

          I don’t use Claude Code or any of the newer coding assistants. I use ChatGPT and tell it the code I want with the same type of specifications I would do when I am designing an implementation. It can definitely type 200+ lines of correct code faster than I could especially since I would need to look up the API calls for the AWS SDK.

          I treat it like a junior developer.
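
          For example, here's the kind of boilerplate I mean, as a minimal sketch (assuming boto3; the bucket and prefix names are made up):

              # Sketch of routine AWS SDK code an LLM can type faster than
              # I can look it up. Assumes boto3 is installed and credentials
              # are configured; the bucket and prefix are hypothetical.
              import boto3

              def list_recent_keys(bucket: str, prefix: str, limit: int = 100):
                  """Return up to `limit` object keys under `prefix`, newest first."""
                  s3 = boto3.client("s3")
                  paginator = s3.get_paginator("list_objects_v2")
                  objects = []
                  for page in paginator.paginate(Bucket=bucket, Prefix=prefix):
                      objects.extend(page.get("Contents", []))
                  objects.sort(key=lambda o: o["LastModified"], reverse=True)
                  return [o["Key"] for o in objects[:limit]]

              print(list_recent_keys("my-example-bucket", "logs/2024/"))

          Nothing clever, but it's exactly the sort of thing I'd otherwise spend ten minutes in the SDK docs for.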

  • koakuma-chan 5 hours ago

    I agree on all points, but I also have PTSD from the pre-LLM era, when people kept telling me that my code was garbage because it wasn't SOLID or whatever. I prefer the way it is now.

    • skydhash 5 hours ago

      SOLID is a nice set of principles. And like all principles, there are valid reasons to break them. To use them or not is a decision best made after you've become a master, when you know the tradeoffs and costs.

      Learn the rules first, then learn when to break them.
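
      To make one concrete, here is a minimal Python sketch of the D (dependency inversion); the class names are made up for illustration:

          from abc import ABC, abstractmethod

          class Notifier(ABC):
              """Abstraction the high-level code depends on (the D in SOLID)."""
              @abstractmethod
              def send(self, message: str) -> None: ...

          class EmailNotifier(Notifier):
              def send(self, message: str) -> None:
                  print(f"emailing: {message}")

          class OrderService:
              # Depends on the abstraction, not a concrete mailer, so tests
              # can inject a fake and the mailer can be swapped out later.
              def __init__(self, notifier: Notifier) -> None:
                  self.notifier = notifier

              def place_order(self, item: str) -> None:
                  self.notifier.send(f"order placed: {item}")

          OrderService(EmailNotifier()).place_order("keyboard")

      And a valid reason to break it: in a small script, the indirection costs more than it buys.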

      • koakuma-chan 4 hours ago

        This is idealistic. Do you actually sit down and evaluate whether the code is SOLID, or is it more that you're just vibe-checking it, and it doesn't actually matter whether you call that SOLID or DRY or whatever letters of the alphabet you prefer? Meanwhile your project is just a PostgreSQL proxy.

        • skydhash 4 hours ago

          These are principles, not mathematical equations. It's like drawing a human face. The general rule is that the eyes are spaced one eye-width apart when viewed from the front. Or that the intervals between the chin, the base of the nose, the eyebrows, and the hairline are equal. It does not fit every face, and artists do break these rules. But a beginner breaks them for the wrong reasons.

          So there are a lot of heuristics in code quality. But sometimes, it's just plain bad.

      • mattmanser 4 hours ago

        I actually sat down to really learn what SOLID meant a few years ago when I was getting a new contract and it came up in a few job descriptions. Must have some deep wisdom if everyone wants SOLID code, right?

        At least two parts of the SOLID acronym are basically anachronisms, nonsense in modern coding (O and L). And I is basically handled for you by DI frameworks. D doesn't mean what most people think it does.

        S is the only bit left and it's pretty much open to interpretation.

        I don't really see them as anything meaningful; these days it's basically just “make your classes have a single responsibility.” It's on the level of KISS, but less general.

    • majorbugger 5 hours ago

      And what do LLMs have to do with your PTSD?

      • koakuma-chan 5 hours ago

        It's related because those assholes will no longer tell me that I should have written an abstract factory or some shit. AI-generated code is so fucking clean and SOLID.

  • risyachka 5 hours ago

    This.

    If someone says “most of my code is AI”, there are only 3 reasons for it:

    1. They do something very trivial on a daily basis (and that's not a bad thing, you just need to be clear about it).

    2. The skill is not there, so they have to use AI; otherwise it would be faster to DIY it than to explain the complex case and how to solve it to the AI.

    3. They prefer explaining to an LLM rather than writing the code themselves. Again, no issue with this. But we must be clear: it's not faster. It's just that someone else writes the code for you while you explain in detail what to do.

    • JustExAWS 5 minutes ago

      I have been coding professionally for 30 years, and for 10 years before that as a hobbyist, writing assembly on four different architectures. The first 12 years professionally were spent bit-twiddling in C across multiple architectures.

      I doubt very seriously you could tell my code was LLM generated.

      I would very much rather explain to an LLM than write the code myself. Explaining it to an LLM is like rubber ducking ahead of time.

    • barrell 3 hours ago

      To be honest, I'm more inclined to attribute the rampant use of LLMs to the dopaminergic effect of using them. It feels productive. It feels futuristic. It feels like an unlock. Quite viscerally. It doesn't really matter what your seniority or skill level is; you feel you can do whatever is within your wheelhouse, and more, faster.

      Like most dopaminergic activities, though, you end up chasing that original rush, and eventually quit when you can't replicate it and/or realize it is a poor substitute for the real thing, and is likely stunting your growth.

    • bdangubic 4 hours ago

      There is a 4 and a 5 and a 6… :)

      Here's 4: there are senior-level SWEs who have spent their entire careers automating everything they had to do more than once. It is one of the core traits that differentiates “10x” SWEs from the others.

      LLMs have taken the automation part to another level, and the best SWEs I know use them every hour of every day to automate shit that we never had tools to automate before.

nharada 5 hours ago

My biggest annoyance is that people aren't transparent about when they use AI, and thus you are forced to review everything through the lens that it may be human created and thus deserving of your attention and benefit of the doubt.

When an AI generates some nonsense I have zero problem changing or deleting it, but if it's human-written I have to be aware that I may be missing context/understanding and also cognizant of the author's feelings if I just re-write the entire thing without their input.

It's a huge amount of work offloaded on me, the reviewer.

  • kstrauser 5 hours ago

    I disagree. Code is code: it speaks for itself. If it's high quality, I don't care whether it came from a human or an AI trained on good code examples. If it sucks, it's not somehow less awful just because someone worked really hard on it. What would change for me is how tactful I am in wording my response to it, in which case it's a little nicer replying to AIs because I don't care about being mean to them. The summary of my review would be the same either way: here are the bad parts I want you to re-work before I consider this.

    • alansammarone 5 hours ago

      I've had a similar discussion with a coworker whom I respect and know to be very experienced, and interestingly we disagreed on this very point. I'm with you: I think AI is just a tool, and people shouldn't be off the hook because they used AI code. If they consistently deliver bad code, bad PR descriptions, or fail to explain and articulate their reasoning, I don't see any particular reason to treat it differently now that AI exists. It goes both ways, of course: the reviewer also shouldn't pay less attention when the code did not involve AI help in any form. These are completely orthogonal, and I honestly don't see why people hold this view.

      The person who created the PR is responsible for it. Period. Nothing changes.

      • skydhash 5 hours ago

        It does change things, because the number of PRs goes up. So instead of reviewing, it's more like back-and-forth debugging, where you are doing the checking the author was supposed to do.

        • alansammarone 5 hours ago

          So the author is not a great programmer/professional. I agree with you that they should have done their homework, tested it, have a mental model for why and how, etc. If they don't, it doesn't seem to be particularly relevant to me if that's because they had a concussion or because they use AI.

          • skydhash 5 hours ago

            It's easy to skip quality in code, starting with coding only the happy path, plus bad design that hides bugs. Handling errors properly can take a lot of time, and designing to avoid errors takes longer.

            So when you have a tool that can easily produce things that fit the happy path, don't be surprised that the number of PRs goes up. Before, by the time you could write the happy path that easily, experience had taught you all the error cases you would otherwise have skipped.
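
            Concretely, the gap looks something like this (a Python sketch; the config-loading example is made up):

                import json

                # Happy path only: the kind of code that's easy to generate.
                def load_config_happy(path):
                    return json.load(open(path))

                # Designed for errors: slower to write, and where the real work is.
                def load_config(path):
                    try:
                        with open(path, encoding="utf-8") as f:
                            return json.load(f)
                    except FileNotFoundError:
                        return {}  # explicit default instead of a crash
                    except json.JSONDecodeError as e:
                        raise ValueError(f"{path} is not valid JSON: {e}") from e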

            • JustExAWS a minute ago

              I have been developing for 40 years, 10 as a hobbyist and 30 as a professional. I always started with the happy path, made sure it worked, and then kept thinking about corner cases. If an LLM can get me through the happy path (and it often generates code to guard against corner cases), why wouldn't I use it?

    • skydhash 5 hours ago

      Adding to the sibling comment by @jacknews: code is much more than an algorithm; it's a description of the algorithm that is unambiguous and human-readable. Code review is a communication tool. The basic expectation is that you're a professional and I'm just adding another set of eyes, or you're a junior and I'm reviewing for the sake of training.

      So when there’s some confusion, I’m going back to the author. Because you should know why each line was written and how it contributes to the solution.

      But a complete review takes time. So in a lot of places we only do a quick scan for unusual stuff instead of truly reviewing the algorithms, because we trust our colleagues to test and verify their own work. Which AI users usually skip.

    • jcranmer 5 hours ago

      The problem with reviewing AI-written code is that AI makes mistakes in very different ways from the way humans make mistakes, so you essentially have to retrain yourself to watch for the kinds of mistakes that AI makes.

    • bluGill 5 hours ago

      Code doesn't always speak for itself. I've had to do some weird things that make no sense on their own. I leave comments, but they are not always easy to understand. Most of this is when I'm sharing data across threads: there is a good reason for each lock/atomic and each missing lock. (I avoid writing such code, but sometimes there is no choice.) If AI is writing such code, I don't trust it to figure out those details, while I have some coworkers (only a minority, but some) I trust to figure this out.

    • chomp 5 hours ago

      > What would change for me is how tactful I am in wording my response to it

      So code is not code? You’re admitting that provenance matters in how you handle it.

    • elviejo 5 hours ago

      Code is not only code.

      It's like saying physics is just math. If we read:

      F = m*a

      there is a ton of knowledge encoded in that formula.

      We cannot evaluate the formula alone. We need the knowledge behind it to see if it matches reality.

      With LLMs, we know for a fact that if the code matches reality, or expectations, it's a happy accident.

    • roughly 5 hours ago

      The problem with AI-generated code is there's no unifying theory behind the change. When a human writes code and one part of the system looks weird or different, there's usually a reason: by digging into the underlying system, I can usually figure out what they were going for or why they did something in a particular way. I can only provide useful feedback or alternatives if I can figure out why something was done, though.

      LLM-generated code has no unifying theory behind it; every line may as well have been written by a different person, so you get an utterly insane-looking codebase with no common thread tying it together and no reason why. It's like trying to figure out what the fuck is happening in a legacy codebase, except it's brand new. I've wasted hours trying to understand someone's MR, only to realize it's vibe code and there's no reason for any of it.

    • m463 4 hours ago

      > Code is code

      oh come on.

      That's like saying "food is food" or "an AI howto is the same as a human-written howto".

      The problem is that code that looks good is not the same as code that is good, but they are superficially similar to a reviewer.

      and... you can absolutely bury reviewers in it.

    • risyachka 5 hours ago

      This is all great except it doesn't give any reason not to label AI code.

    • jacknews 5 hours ago

      Half of the point of code review is to provide expert or alternative feedback to junior and other developers, to share solutions and create a consistent style, standard and approach.

      So no, code is not just code.

  • Workaccount2 5 hours ago

    >My biggest annoyance is that people aren't transparent about when they use AI

    You get shamed and dismissed for mentioning that you used AI, so naturally nobody mentions it. They mention AI the first time, see the blowback, and never mention it again. It just shows how myopic groupthink can be.

madamelic 5 hours ago

As others have said, LLM generation of code is no excuse for not self-reviewing, testing, and understanding your own code.

It's a tool. I still have the expectation of people being thoughtful and 'code craftspeople'.

The only caveat is the verbosity of the code. It drives me up the wall how these models try to one-shot production code and put in a lot of cruft. I'm starting to expect to have to go in and pare down overly ambitious code to reduce complexity.

I adopted LLM coding fairly early on (GPT3) and the difference between then and now is immense. It's a fast-moving technology still so I don't have the expectation that the model or tool I use today will be the one I use in 3 months.

I have switched modalities and models pretty regularly to try to keep cutting edge and getting the best results. I think people who refuse to leverage LLMs for code generation to some degree are going to be left behind. It's going to be the equivalent, in my opinion, of keeping hard cover reference manuals on your desk versus using a search engine.

yomismoaqui 5 hours ago

AI coding would be better with a little professionalism thrown in. I mean, if you commit that code, you are responsible for it. Period.

And I say this as a grumpy senior who has found a lot of value in tools like Copilot and especially Claude Code.

cadamsdotcom 2 hours ago

Make your agent do TDD.

Claude struggles with writing a test that's meant to fail, but it can be coaxed into doing it on the second or third attempt. Luckily it does not struggle when I insist the failure be for the right reason (as opposed to failing because of a setup issue or a problem elsewhere in the code).

When doing TDD with Claude Code I lean heavily on asking the agent two things: “can we watch it fail?” and “does it fail for the right reason?” These questions are generic enough to sleepwalk through building most features and fixing all bugs. Yes, I said all bugs. The “watch it fail” step looks something like the sketch below.
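
A minimal pytest-style sketch of that failing-first step (slugify() and its spec are made up for illustration):

    # Written before the implementation exists. Run pytest and confirm the
    # failure is the AssertionError (the right reason), not an ImportError
    # or a fixture problem.
    def slugify(text: str) -> str:
        return ""  # deliberate stub: the test below must fail here

    def test_collapses_whitespace_and_lowercases():
        assert slugify("Hello   World") == "hello-world"

    # Only after watching it fail do we let the agent write the real thing:
    #     return "-".join(text.lower().split())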

Reviewing the code is very pleasant because you get both the tests and production code and you can rely on the symmetry between them to understand the code’s intent and confirm that it does what it says.

In my experience over multiple months of greenfield and brownfield work, Claude doing TDD produces code that is 100% the quality and clarity I'd have achieved had I built the thing myself, and it does so 100% of the time. A big part of that is that TDD compartmentalizes each task, making it easy to avoid any single task having too much complexity.

duxup 5 hours ago

I'm really not seeing a lot of code that I can say is bad AI code.

My coworkers and I use AI, and the incoming code seems pretty OK. But my view is limited to my current small employer.

  • dgunay 2 hours ago

    Most of the AI generated code I review is pretty much okay. Usually does what it should and meets some standard of quality. But it usually looks and feels just slightly stylistically foreign compared to the code around it. I personally edit mine before code review so that it looks how I would have written it, but there are many chunks of code in our codebase now where the author and reviewer didn't do that.

  • whycome 4 hours ago

    The breadth of the industry is so vast that people have wildly different takes on this. For a lot of simple coding tasks (e.g. custom plugins or apps) an LLM is not only efficient but extremely competent. Some traditional coders are having a harder time working with them, because a major challenge is defining the problem and constraints well, something usually kept in one's head. So new skill sets are emerging and being refined. The ones who thrive here will not be coders but generalists with excellent management and communication skills.

    • duxup 4 hours ago

      Yeah, most of my team is using an LLM for “make this function better”, or for learning, or for somewhat smaller bites of code that an LLM works well with. So we don't see the “hey, rewrite this whole 20-year-old complicated application, omg it didn't work” kind of situations.

vegancap 5 hours ago

Yeah, I get the feeling. I'm torn, to be honest, because I quite enjoy using it, but then I sift through everything line by line, correct things, change the formatting, alter parts it's gotten wrong. So for me, it saves a little bit of the time of manually writing it all out. My colleagues are either like me or aren't sold on it. So I think there's a level of trust and recognition that even if we are using it, we're using it cautiously, and wouldn't just YOLO some AI-generated code straight into main.

But we're a really small but mature engineering org. I can't imagine the bigger companies with hundreds of less experienced engineers just using it without care and caution; it must cause absolute chaos (or will soon).

  • dgunay 2 hours ago

    I use it as more of a focusing tool. Without it, I frequently get distracted by small rabbit holes (I should add more logging to this function, oh I should also add some doc comments, oh I should also refactor this, etc) or I don't have the energy to do small touch ups like that. Having a bunch of agents do tiny fixes in the background on separate branches keeps me on task, prevents me from bloating PRs, and makes it more likely that I choose to do these small QoL improvements _at all_.

twalichiewicz 5 hours ago

I get why it feels bleak—low-effort AI output flooding workflows isn’t fun to deal with. But the dynamic isn’t new. It only feels unprecedented because we’re living through it. Think back: the loom, the printing press, the typewriter, the calculator.

When Gutenberg’s press arrived, monks likely thought: “Who would want uniform, soulless copies of the Bible when I can hand-craft one with perfect penmanship and illustrations? I’ve spent my life mastering this craft.”

But most people didn't care. They wanted access and speed. The same trade-off shows up with mass-market books, IKEA furniture, Amazon Basics. A small group still prizes the artisanal version, but the majority just wants something that works.

  • kipchak 3 hours ago

    I'm not sure it's so much that most people don't care as that hand-crafted items are more expensive. As evidence of popular interest, “craftwashing”[1] mass-produced goods with terms like “artisanal” and “small-batch” can be an effective marketing strategy. To use the example of a Bible, a 1611 King James facsimile still commands a hefty premium[2] over a regular print. Or, for paintings: who would prefer a print over an original?

    There's also the "Cottagecore" aesthetic that was popular a few years ago, which is conceptually similar to the Arts and Crafts movement or the earlier Luddites.

    [1]https://www.craftbeer.com/craft-beer-muses/craftwashing-happ...

    [2]https://www.thekjvstore.com/1611-king-james-bible-regular-fa...

  • whycome 4 hours ago

    Did you just basically coin “artisanal code”?

juancn 5 hours ago

I only use small local models like those in IntelliJ (under 100M each), which just save you the tedium of typing common boilerplate.

But I don't prompt them; they typically just suggest a completion, usually better than what we had before from pure static analysis.

Anything more than that detracts. I learn nothing, and the code is believable crap, which requires mind-bogglingly boring and intense code reviews.

It's sometimes fine for prototyping throw-away code (especially if you don't intend to invest in learning the tech deeply), but I don't like what I miss by not doing the thinking myself.

throwacct 4 hours ago

I'm using "AI" almost exclusively to scaffold projects. I spent 2 days trying to find the reason the code wasn't working the way it was supposed to. Where I work, we use it in moderation, knowing that if you generate code, you must double-check everything and confirm that what you generated doesn't smell. You'll be held accountable if something breaks because you were eager to push unreviewed code.

greenavocado 5 hours ago

AI-generated code from Claude Sonnet, Kimi K2 0905, or GLM-4.5 is not good enough to simultaneously maintain structure and implement features in complex code without doing insane things like grossly violating every SOLID principle. If you impose too much structure on them, they fall apart, because too often they don't truly understand the long-range ramifications of their code. These assistants are best suited to generating highly testable snippets; pushing them to work in a large codebase stretches their capabilities too far, too often.

Herring 5 hours ago

AI will keep improving

https://epoch.ai/blog/can-ai-scaling-continue-through-2030

https://epoch.ai/blog/what-will-ai-look-like-in-2030

There's a good chance that eventually reading code will become like inspecting assembly.

  • epicureanideal 5 hours ago

    > There's a good chance that eventually reading code will become like inspecting assembly.

    We don't read assembly because we read the higher-level code, which is deterministically compiled to lower-level code.

    The equivalent situation for LLMs would be if we were reviewing the prompts only, and if we had 100% confidence that the prompt resulted in code that does exactly what the prompt asks.

    Otherwise we need to inspect the generated code. So the situation isn’t the same, at least not with current LLMs and current LLM workflows.

    • YeGoblynQueenne 5 hours ago

      >> We don’t read assembly because we read the higher level code, which deterministically is compiled to lower level code.

      I think the reason "we" don't read, or write, assembly is that it takes a lot of effort and a detailed understanding of computer architecture, which are simply not found in the majority of programmers, e.g. those used to working with JavaScript frameworks on web apps and the like.

      There are of course many "we" who work with assembly every day: people working with embedded systems, for instance, or games programmers.

  • savorypiano 3 hours ago

    Except the point is that you shouldn't need to inspect your assembly. Assembly does exactly what the higher-level code tells it to (just about), whereas you cannot read your English prompt and rely on it.

  • runjake 5 hours ago

    > AI will keep improving

    Agree. But most code already generated won't be improved until many years from now.

    > There's a good chance that eventually reading code will become like inspecting assembly.

    Also agree, but I believe it will be very inefficient and complex code, unlike most written assembly.

    I'm not sure tight code matters to anyone but maybe 0.0001% of us programmers anymore.

breppp 4 hours ago

Per Brandolini's law, there's an asymmetry between the time it takes to generate crap code and the time it takes to review it.

That's what makes it feel disrespectful, as if someone is wasting your time when they could have done better.

bigstrat2003 5 hours ago

I think people will eventually wake up and realize LLMs aren't actually good for generating code, but it might take a while. The hype train is rolling at full steam and a lot of people won't get off until they get personally burned.

gerash 5 hours ago

One downside IMHO is reimplementing the same building blocks rather than refactoring and reusing because it’s cheap to reimplement.

andrewstuart 5 hours ago

No I love it.

When I see AI code I feel excited that the developer is building stuff beyond their previous limits.

  • bluefirebrand 2 hours ago

    This is only true if the AI is producing anything beyond anyone's previous limits.

    For anyone beyond the most beginner juniors, this is absolutely not true.

add-sub-mul-div 5 hours ago

I am so glad I spent 25 years in this field, made my bag, and got out right before it became the norm to stop doing the fun part of the job yourself.

sys13 5 hours ago

> the best suggestions I've seen are found by linters in CI, and spell checkers

I don't think this is a rational take on the utility of AI. You're really not leveraging it well.

codingdave 5 hours ago

I think we're going to look back on this time as "Remember when basically all new software dev spun its wheels for years while everyone tried to figure out where AI fit in?"

I'm not sick of AI. I'm just sick of people thinking that AI should be everything in our industry. I don't know how many times I can say "It is just a tool." Because it is. We're 3 years deep into LLM-based products, and people are just now starting to even ask... "Hey, where are the strengths and weaknesses of this tool, and best practices for when to use it or not?"

codr7 5 hours ago

Not my future.

incomingpain 5 hours ago

>I think I'm finally ready to get off the ride.

I'm sorry you feel that way. Yes, this is probably the future.

AI is a new tool, or really a huge new category of AI tools, that will take time to gain competency with.

AI doesn't eliminate the need for developers; it's just a whole new load of baggage, and we will NEVER get to the point where that new pile of problems shrinks to 0.

A tool that Gemini CLI really loves is Ruff; I run it often :)

apple4ever 5 hours ago

I'm sick of AI in general.

MongooseStudios 5 hours ago

I'm sick of AI everything. Every day I hope today is the day the grift machine finally implodes.

In the short term it's going to make things suck even more, but I'm ready to rip that bandaid off.

P.S. To anyone who is about to reply to this, or downvote it, to tell me that AI is the future: you should be aware that I also hope someone places a rotting trout in your sock drawer each day.

  • Workaccount2 5 hours ago

    I get this take because of the existential threat, but it ignores that, at least right now, LLMs are enabling people to get more from their computers than ever before.

    Maybe LLMs can't build you an enterprise back-end for thousands of users. But they sure as shit can make you a bespoke applet that easily tracks your garage sale items (something on the order of the sketch below). LLMs really shine in greenfield <5k LOC programs.
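
    A minimal sketch of that kind of applet (Python; the fields, file name, and commands are made up):

        # Roughly the whole "bespoke applet": a one-file garage sale tracker.
        import csv
        import sys

        FILE = "garage_sale.csv"

        def add(name: str, price: str) -> None:
            with open(FILE, "a", newline="") as f:
                csv.writer(f).writerow([name, price])

        def total() -> float:
            try:
                with open(FILE, newline="") as f:
                    return sum(float(row[1]) for row in csv.reader(f))
            except FileNotFoundError:
                return 0.0  # no sales recorded yet

        if __name__ == "__main__":
            if len(sys.argv) >= 4 and sys.argv[1] == "add":
                add(sys.argv[2], sys.argv[3])  # e.g. add "old lamp" 5.00
            else:
                print(f"total: ${total():.2f}")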

    I think it's largely a mistake for devs to think that LLMs are made for them, rather than for enabling regular people to get far more mileage out of their computers.

  • StellaMary 5 hours ago

    No hate, mate. It's real that AI can code anything you need, but the brutal fact is that you need to be much smarter to validate its code; all I see is a skill and XP issue. But as a Rust and Node.js dev, I made an ultra-fast HTTP framework using AI with just 200 lines of code, and it beat Fastify, Hono, and Express. That was all possible only because of my XP with FFI, Rust, and Node.js architecture lessons.

    https://www.npmjs.com/package/brahma-firelight

thesuperbigfrog 5 hours ago

If you eat lots of highly processed food, don't be surprised if it makes you less healthy.

sexyman48 5 hours ago

> I'm finally ready to get off the ride

c ya, wouldn't wanna b ya.