• 0 Posts
  • 7 Comments
Joined 1 year ago
Cake day: November 23rd, 2024



  • I think you really nailed the crux of the matter.

    With the ‘autocomplete-like’ nature of current LLMs, the issue is precisely that you can never be sure of any answer’s validity. Some approaches try to address this by listing ‘sources’ next to the answer, but that doesn’t mean those sources’ findings actually match the generated text, and it’s not a given that the sources themselves are reputable - so you’re back to perusing them yourself anyway.

    If there were a certainty meter next to the answers, this would be much more meaningful for serious use cases, but of course by design such a thing seems impossible to implement with the current approaches.

    I will say that in my personal (hobby) projects I have found a few good use cases for letting the models spit out guesses, e.g. possible causes of a programming bug or directions worth researching, but I am just not sold that the weight of all the costs (cognitive, social, and of course environmental) is worth it for that alone.


  • I’ve been exclusively reading my fiction books (all epubs) on Readest and absolutely love it. Recently I also started using it for my nonfiction books and articles (mostly PDFs) as an experiment, and it’s workable, though still a little rough around the edges.

    You can highlight and annotate, and export all annotations for a book once you are done, for which I have set up a small pipeline to directly import them into my reference management software.
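    The pipeline in my case boils down to a small conversion script. A minimal sketch of the idea, assuming the annotations come out as a JSON list (the field names `chapter`, `text`, and `note` here are placeholders - adapt them to whatever your export actually contains) and the reference manager accepts a flat CSV import:

```python
import csv
import json
from pathlib import Path


def annotations_to_csv(export_path: str, out_path: str) -> int:
    """Flatten a JSON annotation export into a CSV for import
    into a reference manager. Returns the number of rows written.
    Field names are assumptions about the export format."""
    items = json.loads(Path(export_path).read_text(encoding="utf-8"))
    rows = [
        {
            "chapter": item.get("chapter", ""),
            "highlight": item.get("text", ""),
            "note": item.get("note", ""),
        }
        for item in items
    ]
    with open(out_path, "w", newline="", encoding="utf-8") as fh:
        writer = csv.DictWriter(fh, fieldnames=["chapter", "highlight", "note"])
        writer.writeheader()
        writer.writerows(rows)
    return len(rows)
```

    The nice part is that once the export lands in a watched folder, the whole hand-off is one script invocation rather than copy-pasting highlights around.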

    It works pretty well with local storage (though I don’t believe it does ‘auto-imports’ of new files by default) and I’ve additionally been using their free hosted offering to sync my reading progress. It’s neat and free up to 500 MB of books, but you’re right that I would also prefer a bring-your-own-storage solution, perhaps in the future.

    The paid upgrades are mostly for AI stuff and translations which I don’t really concern myself with.



  • Open source/selfhost projects 100% keep track of how many people star a repo, what MRs are submitted, and even usage/install data.

    I feel it is important to make a distinction here, though:

    GitHub, the for-profit, non-FOSS, Microsoft-owned platform, keeps track of the ‘stars of a repo’, not the open-source self-host projects themselves. If somebody hosts their repo on Codeberg, sr.ht, their own infrastructure, or even GitLab, there’s generally little to no algorithmic number-crunching involved. The same goes for MRs/PRs.

    Additionally - as far as I know - very few fully FOSS programs have extensive usage/install telemetry, and even fewer make it opt-out. Tracking that can’t be disabled at all is something I’ve essentially never heard of in that space, because every time a project moves in that direction the public reaction is usually very strong (see e.g. Audacity).


  • Interesting, so Metal3 is basically Kubernetes-managed bare-metal nodes?

    Over the last few years I’ve cobbled together a nice Ansible-driven IaC setup that provisions Incus and Docker on various machines. It’s always the ‘first mile’ of completely reproducible bare-metal machines that I struggle with: how do I do the initial provisioning without too much manual intervention?

    Ansible gets me there partly, but I would still like to have e.g. the root file system running on btrfs, which I’ve found hard to accomplish with these tools alone when first provisioning a new machine.