Details here: https://github.com/LemmyNet/lemmy/issues/3165

This will VASTLY decrease the PostgreSQL I/O load on the server, as the mistaken code writes ~1700 rows (one per known Lemmy instance in the database) on every single comment and post creation. Because these are writes, they also create record-locking issues, which are harsh on the system. Once this is fixed, some site operators will be able to downgrade their hardware! ;)
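
For readers who want a concrete picture of the bug class, the shape of the problem is roughly the following. This is a minimal sketch using the site_aggregates / comments names that come up later in the thread; the function and trigger names and the exact body are illustrative, not Lemmy's actual code:

    -- Illustrative sketch, not the actual Lemmy trigger: a per-row trigger
    -- that bumps the comment counter on EVERY row of site_aggregates.
    -- With ~1700 known instances in that table, each comment INSERT turns
    -- into ~1700 row writes (and row locks).
    CREATE OR REPLACE FUNCTION site_aggregates_comment_insert()
    RETURNS trigger AS $$
    BEGIN
        UPDATE site_aggregates
        SET comments = comments + 1;   -- no WHERE clause: touches every row
        RETURN NULL;
    END;
    $$ LANGUAGE plpgsql;

    CREATE TRIGGER site_aggregates_comment_count
    AFTER INSERT ON comment
    FOR EACH ROW
    EXECUTE FUNCTION site_aggregates_comment_insert();

The obvious direction for a fix is to scope that UPDATE with a WHERE clause so only the relevant site's row is touched.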

  • _Rho_@lemmy.world · 38 points · 1 year ago

    This is fascinating

    My biggest takeaway from reading through the GitHub comments, though, is that it seems like no one actually knows where much of the SQL comes from? As in, it’s possible that the bug in question is just one manifestation of old, handwritten Postgres code that may or may not be optimized (or even logical?).

    I don’t mean this in a critical way, as things like this are bound to happen in an open-source, federated world. However, I would think a comprehensive audit of the Lemmy Postgres triggers, queries, etc could potentially save us all from some future headaches.
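
    As a starting point for that kind of audit, PostgreSQL keeps every trigger in its system catalogs, so something like the following (a generic catalog query, nothing Lemmy-specific) lists what is actually installed in a live database:

        -- List every user-defined trigger, the table it fires on, and the
        -- function it calls; internal (FK constraint) triggers are excluded.
        SELECT t.tgname  AS trigger_name,
               c.relname AS table_name,
               p.proname AS function_name
        FROM pg_trigger t
        JOIN pg_class c ON c.oid = t.tgrelid
        JOIN pg_proc  p ON p.oid = t.tgfoid
        WHERE NOT t.tgisinternal
        ORDER BY c.relname, t.tgname;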

    • Aux@lemmy.world · 4 up / 49 down · 1 year ago

      That’s not fascinating, that’s depressing. Lemmy team lacks development skill.

      • basequal@lemmy.world · 34 points · 1 year ago

        I am always fascinated by these types of comments, especially about free and open-source software. There are Lemmy instances supporting hundreds of thousands of users and their traffic, and feedback from both server owners and Lemmy devs is almost instantaneous.

        A platform like Lemmy requires client-side knowledge to build performant desktop and mobile UIs, ActivityPub knowledge to integrate with the Fediverse, backend knowledge to build APIs with 100% feature compatibility for third-party apps, DB knowledge to optimize queries, and devops/platform knowledge to deploy it all.

        And all of this is built in public.

        BuT LEMmY tEaM lAcks dEvEloPmenT SkiLL – sure buddy.

        • Aux@lemmy.world · 2 up / 18 down · 1 year ago

          What’s so special about open source? Shit code is shit code.

          • CurseBunny@lemmy.blahaj.zone · 4 points · 1 year ago

            It’s more to the point that a project of this scale and range in the hands of a small group of people is expected to have some shit code somewhere.

      • Zeth0s@lemmy.world · 13 points · 1 year ago

        Or there’s just room for improvement and optimization. Each developer has their strengths and weaknesses, like any other professional, and a system like Lemmy is very complex and requires covering a lot, from backend to frontend.

        And there used to be only 2 developers.

        I once checked an open-source implementation of a niche product from Microsoft, and it was a nightmare of unoptimized code. And Microsoft spent a lot of development resources there.

        Creating Lemmy as a two-person job is quite impressive. Luckily there are now resources for optimization.

      • darth_helmet@sh.itjust.works · 11 points · 1 year ago

        This is such a dumb take. Using a database efficiently is not some binary, one-off thing: you build what works based on the data you have at the time, and when it works, you move on to other features. It takes analysis of real operation over time to find the bottlenecks, and discipline to focus on fixing the things that will most benefit your users.

        There are many successful tech companies that regularly ship features with dogshit performance impacts. They get away with it because people are looking at metrics and catching issues to fix. This is healthy.
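
        For what it’s worth, PostgreSQL makes that kind of after-the-fact analysis fairly cheap. A sketch of the usual approach, assuming the pg_stat_statements extension is available (column names are the PostgreSQL 13+ ones):

            -- Enable statement-level statistics (the extension also has to be
            -- in shared_preload_libraries), then look at the costliest queries.
            CREATE EXTENSION IF NOT EXISTS pg_stat_statements;

            SELECT calls,
                   rows,
                   round(total_exec_time::numeric, 1) AS total_ms,
                   round(mean_exec_time::numeric, 2)  AS mean_ms,
                   left(query, 80)                    AS query_preview
            FROM pg_stat_statements
            ORDER BY total_exec_time DESC
            LIMIT 20;

        A trigger like the one in this issue shows up indirectly, as an otherwise trivial INSERT with a suspiciously high mean execution time.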

  • Shadow@lemmy.ca · 30 up / 1 down · 1 year ago

    It’s not on every comment; it’s mostly triggered on deletions and edits. The problem is actually far worse than that: 1700 rows are updated if you delete 3 comments, and if you delete more it grows exponentially until it just straight up fails and locks your database.

    I’ll probably put a patch in there later tonight and then see about a PR unless someone else does.

    • RoundSparrow@lemmy.ml (OP, mod) · 28 points · 1 year ago

      “It’s not on every comment,”

      My testing with the latest code shows that it is indeed on every single comment INSERT, i.e. every new comment. I have the ability to view my live data while using Lemmy: https://lemmyadmin.bulletintree.com/query/raw_site_aggregates?output=table

      Every one of the 1486 rows in that table gets +1 in its comments column when I post a new comment on my instance.
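
      Anyone with direct database access can reproduce that observation. A rough way to check, assuming site_aggregates exposes site_id and comments columns as the linked view suggests:

          -- Snapshot the per-site comment counters...
          CREATE TEMP TABLE sa_before AS
          SELECT site_id, comments FROM site_aggregates;

          -- ...post exactly one comment through the UI, then count how many
          -- rows were incremented. With the bug present this is ~1486 (every
          -- known instance) instead of 1 (just the local site).
          SELECT count(*) AS rows_incremented
          FROM site_aggregates sa
          JOIN sa_before b USING (site_id)
          WHERE sa.comments = b.comments + 1;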

      “it’s mostly triggered on deletions and edits”

      That is not correct. Edits do not change the comments count column on site_aggregates, because the number isn’t changing. Deletes (of a comment or post) in Lemmy are also not SQL DELETE statements; they just set a deleted flag column on the row. The DELETE trigger in PostgreSQL only runs when an end-user cancels their Lemmy account in their profile.
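
      In SQL terms the distinction looks roughly like this; the deleted flag is the column mentioned above, while the creator_id column and the ids are assumptions for illustration:

          -- A Lemmy "delete" of a comment is a soft delete: an UPDATE that
          -- flips a flag. No DELETE trigger fires for this.
          UPDATE comment
          SET deleted = true
          WHERE id = 42;           -- made-up id

          -- Only a real SQL DELETE, e.g. when an account is removed and its
          -- content purged, fires the AFTER DELETE triggers on the table.
          DELETE FROM comment
          WHERE creator_id = 123;  -- made-up id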

      • Shadow@lemmy.ca · 11 points · 1 year ago

        Ah, gotcha. That’s true, but the cascading issue that causes thousands of row writes happens on a delete.

        That table update you’re looking at is blazing fast due to the relatively low number of sites, until you run it thousands of times on a delete.

    • RoundSparrow@lemmy.ml (OP, mod) · 14 points · 1 year ago

      We have had DBAs; the problem is that the Rust code uses an ORM and an auto-JSON framework, which makes tracing the code time-consuming to learn.

        • RoundSparrow@lemmy.ml (OP, mod) · 1 point · 1 year ago

          In this one case, it was hand-written SQL inside a PostgreSQL FUNCTION that the ORM knows nothing about. But the whole application takes the approach of serving live data from PostgreSQL for every little thing.
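
          For anyone who wants to read that hand-written SQL without going through the ORM at all, the function bodies are stored in the catalog and can be dumped directly:

              -- Dump the source of every PL/pgSQL trigger function so the
              -- hand-written SQL inside can be reviewed on its own.
              SELECT p.proname AS function_name,
                     p.prosrc  AS function_source
              FROM pg_proc p
              JOIN pg_language l ON l.oid = p.prolang
              WHERE l.lanname = 'plpgsql'
                AND p.prorettype = 'trigger'::regtype
              ORDER BY p.proname;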