• 7 Posts
  • 38 Comments
Joined 2 months ago
Cake day: April 4th, 2025



  • I honestly don’t understand how this protocol can protect anything HTTP+HTML wouldn’t. If you build a browser that supports modern web technologies using Gemini, we’ll be back at the same spot. The only thing saving the protocol is its relative obscurity. A dedicated and knowledgeable dev could abuse it any way they like, no?

    No. Just as examples:

    • If the protocol does not support JavaScript, the server cannot ask the client to run script code which strip-searches your computer for fingerprinting information.
    • If the protocol does not support tracking pixels and inline images, a server can’t use them.
    • If the protocol transmits only text, the server won’t know width and height of the screen, or names and geometry of your set of fonts.

    Oh, and all that makes the “small web” uninteresting for advertising.

    Of course, you could publish a blog as web pages consisting of plain ol’ HTML like in 1993. But setting up even a simple HTTP server is a lot of work. Most users won’t turn off JavaScript. And to many people, the modern WWW is a lost cause. And given Firefox’s dependency on Google, this isn’t going to get better.


  • HaraldvonBlauzahn@feddit.org OP to Open Source@lemmy.ml · Project Gemini FAQ
    7 hours ago

    But who actually still writes HTML by hand?

    One could also argue that formatting web content in Markdown breaks compatibility and one should rather use HTML for formatting comments, because it is the standard.

    The Gemini markup and protocol are designed to be simple, and the markup is designed to be written by hand. This gives you a workflow very similar to a wiki, without any extra infrastructure needed - and this is what makes a decentralized web possible. For normal people, setting up a standard web server for a small blog is too complicated and takes too much time.

    And for protocol conversion, there are gateways, much like the ones that let you access FTP or Gopher servers in a browser.
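    To give a sense of how small such a client or gateway really is, here is a hedged Python sketch of the whole client side of the protocol. Function names are my own; per the Gemini spec, a request is just the URL plus CRLF sent over TLS on port 1965, and the response starts with a `<status> <meta>` header line (20 means success). Certificate verification is disabled here for brevity - real clients usually do "trust on first use" instead.

    ```python
    import ssl
    import socket
    from urllib.parse import urlparse

    def parse_header(header: bytes) -> tuple[int, str]:
        """Split a Gemini response header b'<status> <meta>' into its parts."""
        status, _, meta = header.decode("utf-8").partition(" ")
        return int(status), meta

    def fetch(url: str) -> tuple[int, str, bytes]:
        """Fetch one Gemini URL: TLS to port 1965, send URL + CRLF, read reply."""
        host = urlparse(url).hostname
        ctx = ssl.create_default_context()
        ctx.check_hostname = False          # illustrative sketch only;
        ctx.verify_mode = ssl.CERT_NONE     # real clients pin certificates (TOFU)
        with socket.create_connection((host, 1965)) as sock:
            with ctx.wrap_socket(sock, server_hostname=host) as tls:
                tls.sendall((url + "\r\n").encode("utf-8"))
                data = tls.makefile("rb").read()  # server closes when done
        header, _, body = data.partition(b"\r\n")
        status, meta = parse_header(header)
        return status, meta, body
    ```

    That's the entire exchange - no methods, no request headers, no cookies, which is exactly why servers and gateways stay small.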


  • HaraldvonBlauzahn@feddit.org OP to Open Source@lemmy.ml · Project Gemini FAQ
    9 hours ago

    still not sold on gemini. the project has sort of a holier-than-thou smell to it, striving for the sort of technological purity that makes it unattractive to use. i would still choose gopher.

    Does it annoy you when people try and make stuff that matches their values?

    More comfortable with the killings that FB contributed to in Myanmar or in the Philippines? Or attacks on democracy like this one?

    The power concentration of the “modern” Internet has consequences - and not good ones.

    But speaking for myself: even if the effects of power concentration, targeted advertising, disinformation and so on didn’t matter to me, it would still annoy the hell out of me that some web sites won’t open on a two-year-old, mid-priced smartphone because everything is stuffed to the brim with bloat and tracking.



  • Gemini is kind of a modernized version of the old Gopher protocol. Its purpose is to share hyper-linked text documents and files over a network - in the simplest way possible. It uses a simple markup language to create text documents with links, headings and so on.

    Here is a FAQ

    Main differences with similar technologies are:

    • It is much, much easier to write hyper-linked documents than in HTML

    • a server is much, much smaller and easier to set up than a web server serving HTML. It can easily and securely run on a small Raspberry Pi without special knowledge of server security.

    • unlike Gopher, it supports modern things like MIME types and Unicode

    • There are clients for every platform including Android and iOS

    • also, there are web gateways which let you view the content in a normal web browser

    • unlike wikis, it is only concerned with distributing content, not modifying files. This means the way content is stored and modified can be matched to the use case: write access can go through an NFS or Samba server, or through an SFTP client like WinSCP or Emacs.

    • the above means that it does not need user authentication

    • the protocol is text-centric and allows for distraction-free reading, which makes it ideal for self-hosted blogs or microblogs.
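    As a taste of how simple the markup is: in gemtext, a link line is just “=>” followed by a URL and an optional label, one link per line. Here is a small Python sketch (my own illustrative code, not part of any Gemini tool) that pulls the link lines out of a gemtext document:

    ```python
    def parse_links(gemtext: str) -> list[tuple[str, str]]:
        """Return (url, label) pairs for every gemtext link line ('=> ...')."""
        links = []
        for line in gemtext.splitlines():
            if line.startswith("=>"):
                parts = line[2:].strip().split(maxsplit=1)
                if parts:
                    url = parts[0]
                    # If no label is given, fall back to the URL itself.
                    label = parts[1] if len(parts) > 1 else url
                    links.append((url, label))
        return links

    sample = """# My vacation photos
    Some plain text, shown as-is.
    => gemini://example.org/photos/beach.jpg Beach
    => gemini://example.org/photos/
    """
    ```

    Every line type in gemtext is decided by its first characters alone, so a complete parser is not much longer than this.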

    Practically, for example, I use it to share vacation photos with family.

    Two more use cases that come first to my mind:

    • When I did my master’s thesis, our lab of about 40 people had an HTTP page hosted on a file server that listed tools, data resources, software, and contact persons. That would be easier to do with Gemini because the markup is simpler. Also, today it would not be feasible to give every student write access to a web server’s content, because of the complexity of web servers and the resulting security implications.

    • One time at work, we had a file server with many dozens of folders and hundreds of documents. Because it had all grown rather organically over many years, specific information was hard to find. A Gemini server would have made it easy to organize and browse the content as collaboratively edited hypertext serving as an index.



  • Can somebody summarize the issue? I thought Wayland and Xorg are different projects? So what is the incentive for people to stop using X11? It is also not like Python 2, where any effort to support it further would divert resources from the Python developers working on Python 3. (And compare that to the Perl 6 developers renaming it “Raku” and continuing to support Perl 5, or the SBCL developers just quietly adding support for Unicode - Python 3’s most consequential change - without breaking existing stuff.)

    And one thing more: we have seen companies taking influence on web standards like HTTP/2. Yes, it is still an open standard and supported by FLOSS software - but one cannot deny that many developments in the modern web, like advertising, tracking, data collection, and centralization, are not in the interest of users, and this is why the interests behind specific standards matter. Technology is not free of interests, and technological change is not automatically in the interest of users.



  • Oh, and there is also bup, which might be what you are looking for:

    https://bup.github.io/

    • it stores files in version-controlled copies which can be synced. Perhaps good for backing up photos and such, up to a few GB.

    Two more interesting solutions:

    1. NixOS and Guix SD let you define a system entirely from a single configuration file, so it is easy to re-create when needed.
    2. The Btrfs and ZFS file systems let you take snapshots instantly, which can store earlier versions of files very efficiently. I used that when working with Yocto/BitBake, which compiles an entire embedded system from source - it can handle much larger data volumes than git or bup, and is the right tool for handling versions of binary data.

    And one more: the rsync tool lets you store hard-linked copies of directory trees.
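    The hard-link trick behind rsync’s --link-dest option can be sketched in a few lines of Python: files unchanged since the previous snapshot are hard-linked rather than copied, so each snapshot looks like a complete tree but costs almost no extra disk space. This is my own illustrative sketch, not rsync’s actual implementation:

    ```python
    import os
    import filecmp
    import shutil

    def snapshot(src: str, dest: str, prev: str | None = None) -> None:
        """Copy the tree at src to dest; files unchanged since the prev
        snapshot are hard-linked instead of copied (like rsync --link-dest)."""
        os.makedirs(dest, exist_ok=True)
        for name in os.listdir(src):
            s = os.path.join(src, name)
            d = os.path.join(dest, name)
            p = os.path.join(prev, name) if prev else None
            if os.path.isdir(s):
                snapshot(s, d, p if p and os.path.isdir(p) else None)
            elif p and os.path.isfile(p) and filecmp.cmp(s, p, shallow=False):
                os.link(p, d)       # unchanged: hard link into previous snapshot
            else:
                shutil.copy2(s, d)  # new or changed file: real copy
    ```

    Deleting an old snapshot directory is then safe: the shared file data survives as long as any snapshot still links to it.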

    The key question is however - what do you want?

    • being able to recover earlier versions is essential when working with source code
    • being able to merge such versions in text files is necessary when working on code cooperatively with others - and only source control systems can do this well
    • In 99.9% of the other cases, you just want to be able to re-create a single ground-truth version of all your data after a disaster, and keep that backup copy as current as possible.

    These are not the same requirements, especially the volume of data will differ.

    And also, while you might want or need to go patch by patch through a conflicting source code tree with 10,000 different lines, I guess absolutely nobody is willing, or has the time, to go through a tree with 10,000 conflicting photographs and match them up.

    So the question back is: What is your specific use case and what exactly do you want to achieve?










  • Well, my main reason to use Zim Wiki and Gollum is that all the information stays on my computers - no sync service is needed; I sync via git + ssh to a Raspberry Pi that runs in my home. And this is a critical requirement for me, since as a result of many experiences, my trust that commercial companies which collect data will respect data privacy has reached zero.

    The differences between Zim and Gollum are gradual: Zim is tailored as a desktop wiki, so each page is already in editing mode, which is slightly quicker, while Gollum is more like a classical server-based wiki, normally accessed through the browser (but by default without user authentication). The difference is a bit blurry, since both just modify a git repo, and Gollum can be run on localhost, so it is good for capturing changes on a laptop while on the road and syncing them later. A further difference is that Zim is a bit better for the “quick but not (yet) organized” style of work, while Gollum is better for a designed and maintained structure.

    Both can capture media files and support different kinds of markup, while always storing plain text. Gollum also handles things like PDFs well, displaying them in the browser, and supports syntax highlighting for many programming languages, which makes it nice for programming projects - it is perfect for writing outlines and documentation of software, and I often work by writing documentation first.