Lemmy 0.19.5 has been released with some bug fixes, and we haven’t upgraded to 0.19.4 yet, so I’m planning on doing the upgrade (to 0.19.5) this weekend.

Release notes here: 0.19.4 / 0.19.5

No specific time, but in my test run the upgrade took less than 10 minutes. This assumes that nothing goes wrong 🙂

I’ll do it over the weekend when we normally have lower traffic.

As always, I’ll post updates in the Matrix chat.

If anyone knows of any reason why we should hold off on the update, that would be good to know too!

  • Lodion 🇦🇺@aussie.zone

    The biggest gotcha with 0.19.4 is the required upgrades to Postgres and pict-rs. Postgres requires a full DB dump, delete, and recreate. Pict-rs requires its own backend DB migration, which can take quite a bit of time.
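
    For reference, a rough sketch of what that dump-and-recreate path can look like, assuming a docker compose setup with a service named "postgres" and a "lemmy" database user; the service name, paths, and tags here are illustrative, so adjust them to your own stack:

    ```sh
    # 1. Dump everything while the old-version container is still running
    docker compose exec -T postgres pg_dumpall -U lemmy > postgres_dump.sql

    # 2. Stop the container and move the old data directory aside as a backup
    docker compose stop postgres
    mv volumes/postgres volumes/postgres_old

    # 3. Bump the postgres image tag in docker-compose.yml, then start fresh
    docker compose up -d postgres

    # 4. Restore the dump into the new-version cluster
    docker compose exec -T postgres psql -U lemmy -d postgres < postgres_dump.sql
    ```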

    • Dave@lemmy.nzOPM

      Thanks! I did the pict-rs upgrade some time back when I was setting up the cache cleaner, which needed pict-rs 0.5. Then, in anticipation of this update, I did the Postgres 16 upgrade a week or two back.

      I know the instructions say you have to dump and import the database, but postgres supports an in-place upgrade. I used this helper container: https://github.com/pgautoupgrade/docker-pgautoupgrade

      Basically you swap the Postgres image to this one, set the tag to the version you want, and recreate the container. You watch the logs until the migration is done, shut it down, then swap the tag to the new official Postgres image and recreate the container again. It handles the magic and it happens really quickly.
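
      In case it helps anyone else, here's roughly what that flow looks like, assuming a compose service named "postgres" going from 15 to 16; the exact image tags are worth double-checking against the pgautoupgrade README:

      ```sh
      # 1. In docker-compose.yml, temporarily point the service at the helper image,
      #    tagged with the version you're upgrading TO, e.g.
      #      image: pgautoupgrade/pgautoupgrade:16-alpine
      docker compose up -d postgres

      # 2. Watch the logs until the in-place migration reports it has finished
      docker compose logs -f postgres

      # 3. Stop the container, switch the image back to the official one on the
      #    new version (e.g. image: postgres:16-alpine), and recreate it
      docker compose stop postgres
      docker compose up -d postgres
      ```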

      Hey, on another note, did you know your federation with Lemmy.world is super far behind, and you're losing content because Lemmy stops trying when activities are over a week old? We had the same issue a while back but managed to solve it with a VPS in Finland that batches the activities before sending them on. I'm happy to point you to someone who can help if you're interested!

      • Lodion 🇦🇺@aussie.zone

        Hey Dave, yeah, I know about the issues with high-latency federation. I've been in touch with Nothing4You, but we haven't discussed the batching solution.

        Yes, losing LW content isn't great… but I don't really have the time to invest in implementing a workaround. I'm confident the Lemmy devs will have a solution at some point… hopefully when they do, the LW admins will upgrade pronto 🙂

        • Dave@lemmy.nzOPM

          It’s good to hear you are aware of it and in contact with Nothing4You.

          Unfortunately it doesn't have an obvious solution, and it seems there's no consensus yet; see the GitHub issue. So I wouldn't hold my breath for an update that fixes it any time soon. Also, LW are always careful about updates, often taking months to upgrade, and they should be especially careful with an upgrade that changes how federation works.

          The work to install the batcher isn't that bad. The software is all there: you just add a container to your Lemmy stack and run an Ansible playbook that sets up the remote VPS automatically. It was the first time I'd used Ansible properly, and it wasn't too hard.

          Nothing4You can hold your hand through it; it's worth doing!

  • BlueÆtherA

    Good luck! Now that you've done the hard[er] parts, the Lemmy update should go smoothly.