• 0 Posts
  • 105 Comments
Joined 10 months ago
Cake day: September 10th, 2023


  • The fun comes when there is no actual data model. All in all, I’d say being familiar with the data model is about 60% of my job. 35% is building queries and query scripts for people who need regular exports. 5% is running after other people’s fuckups.

    Strap in, because this is a ride.

    There is a raw database from a decade-and-a-half-old app, which I get to access through a layer of views that does some of the joining, but not all, with absolutely no documentation on how the original database is structured, where things are pulled from or what anything refers to. No data dictionary, no list or map of key relations, some objects mapped in two different views, no semantic naming of columns.

    If you want to query order part delegations by who they’re assigned to (Recipient in the app), you need to use the foreign key RefAssignmentUnit. The “Assignment” unit that did the delegation is just RefUnit. If you have orders that were created by a salesperson on behalf of a customer, OrderingPerson (also a foreign key, but not named Ref-) is the customer, while OrderingPerson2 is the salesperson that entered the order. Don’t confuse that with Creator, which for orders created through the web form is usually a technical user, unless the salesperson is one of the veterans that use the direct app, in which case it’ll be the salesperson while OrderingPerson2 is null.

    Also, we have many-to-many relationships that are mapped through reference tables… whose columns are named object and reference for each and every one. Have fun trying to memorize which refers to which so you don’t need to look it up every damn time.
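    To give you a taste of the mental gymnastics, here’s a tiny sketch (Python + SQLite). The table names and data are invented, since the real schema is undocumented; only the column naming follows the pattern described above:

```python
import sqlite3

# Illustrative only: the table names (Units, OrderPartDelegations) and data
# are made up; only the column names RefUnit / RefAssignmentUnit follow the
# real (undocumented) naming convention described above.
con = sqlite3.connect(":memory:")
con.executescript("""
    CREATE TABLE Units (Id INTEGER PRIMARY KEY, Name TEXT);
    CREATE TABLE OrderPartDelegations (
        Id INTEGER PRIMARY KEY,
        RefUnit INTEGER,            -- the unit that DID the delegating
        RefAssignmentUnit INTEGER   -- the unit it's assigned TO ("Recipient")
    );
    INSERT INTO Units VALUES (1, 'Sales'), (2, 'Workshop');
    INSERT INTO OrderPartDelegations VALUES (10, 1, 2);
""")

# Who RECEIVED delegation 10? Counterintuitively: RefAssignmentUnit, not RefUnit.
row = con.execute("""
    SELECT u.Name
    FROM OrderPartDelegations d
    JOIN Units u ON u.Id = d.RefAssignmentUnit
    WHERE d.Id = 10
""").fetchone()
print(row[0])  # Workshop
```

    Now imagine that, but with zero comments, no documentation, and a few hundred columns.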

    Create my own views to clean this up? Nope, only the third-party service providers for the app can do that, and they don’t wanna. Our internal app admin (singular) can use some awkward tool to generate those views, but there’s no reverse lookup to see what a given column refers to. Also, they have no concept of what actually constitutes a good model, because they’re not really familiar with the database, just with the app.

    Get my own serverless DB to create views that query the original DB? No can do, you’d need to order a whole server and that’s pricey.
    Get a cloud DB? Sure, but it will be managed by the cloud team and if you want to have or edit custom views, you’ll get to create a project request. They’ll put it in the backlog and work it into some future sprint.

    Get literally any tool that allows me to efficiently create reusable data prep, so I don’t have to copy & paste the base transformations for every single query and then update all my query scripts whenever the source DB changes? If you can somehow squeeze the time to prepare a convincing pitch - a full PowerPoint presentation, of course - between all your tedious, redundant query preparation and script maintenance, find a management sponsor willing to hear you out and hopefully propose your request to their superiors. Best case: it becomes a whole project. Alternatives will have to be considered first, then implications, security, costs, and you’ll be the one assembling and presenting all of that to management, only to have some responsible person point out that it would actually be the remit of a different team… which also works in sprints, has a backlog and will give you no control over your prep.
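    For the record, the reusable prep I’m talking about is not rocket science. A hypothetical sketch (all names invented): keep the base transformation in one place and compose every query from it, so a schema change means one edit instead of N:

```python
# Hypothetical sketch of reusable data prep: the base transformation lives
# in ONE string, and every report script composes with it. If the source DB
# changes, only BASE_PREP needs updating, not every query script.
# Table/column names here are invented for illustration.
BASE_PREP = """
WITH clean_orders AS (
    SELECT Id,
           COALESCE(OrderingPerson2, Creator) AS entered_by,
           OrderingPerson AS customer
    FROM Orders
)
"""

def build_query(select_clause: str) -> str:
    """Compose a full query from the shared prep plus a per-report SELECT."""
    return BASE_PREP + select_clause

report_a = build_query("SELECT customer, COUNT(*) FROM clean_orders GROUP BY customer")
report_b = build_query("SELECT entered_by FROM clean_orders")

# Both reports share the same prep; one edit fixes both.
print(report_a.strip().startswith("WITH clean_orders"))  # True
```

    A dbt project, a views layer, even a shared snippet library would do the same job. Any of them would beat copy & paste.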

    And obviously, the app provider doesn’t give us any advance notice of what will change in the DB with the next update. We only learn that when a view breaks. The app admin can then use the tool to refresh the affected views, while I scramble to track down all the scripts that need updating and copy & paste the fix. If a user has been granted their own access to the database, odds are they’ll come crying to me when their modified versions of my queries break.

    There is a lot I like about my job, I acknowledge the difficulties of a historically grown system and service contracts, but the rigid and antiquated corporate culture can go take a long walk off a short pier.



    Ah, gotcha. Yeah, that’s one of those cases where you either add support yourself (provided you have the time, know-how - which most already don’t - and commitment) or wait until hopefully someone else does. Or - like me - you curse and go back to X11 until something gives you enough confidence to try Wayland again. I think I read somewhere on this platform that there will be (or was?) some Nvidia driver update that should help with Wayland support, but I haven’t looked into it.

    I don’t have much experience with laptop hardware. I did have one elderly laptop running Ubuntu, though it probably would have been served better with something more lightweight (I just didn’t know much about anything at the time). But that wasn’t doing anything intensive, just some Uni exercises. I think a simple neural network was the most challenging thing it ever had to handle.






    The first problem, as with many things AI, is nailing down just what you mean by AI.

    The second problem, as with many things Linux, is the question of shipping these things with the Desktop Environment / OS by default, given that not everybody wants or needs that and for those that don’t, it’s just useless bloat.

    The third problem, as with many things FOSS or AI, is transparency, here particularly around training. Would I have to train the models myself? If yes: how would I acquire training data with quantity, quality and transparent control of sources? If no: what control do I have over the source material used to train the model I get?

    The fourth problem is privacy. The tradeoff for a universal assistant is universal access, which requires universal trust. Even if it can only fetch information (read files, query the web), the automated web searches could expose private data to whatever search engine or websites it uses. Particularly in the wake of Recall, the idea of saying “Oh actually we want to do the same as Microsoft” would harm Linux adoption more than it would help.

    The fifth problem is control. The more control you hand to machines, the more control their developers will have. This isn’t just about trusting the machines at that point, it’s about trusting the developers. To build something the caliber of a full AI assistant, you’d need a ridiculous amount of volunteer effort, particularly due to the splintering that always comes with such projects and the friction that creates. Alternatively, you’d need corporate contributions, and those always come with an expectation of profit. Hence we’re back to trust: do you trust a corporation big enough to make a difference to contribute to such an endeavour without any avenue of abuse? I don’t.


    Linux has survived long enough despite not keeping up with every mainstream development. In fact, what drove me to Linux was precisely that it doesn’t do everything Microsoft does. The idea of volunteers (by and large unorganised) trying to match the sheer power of a megacorp (with a strict hierarchy for who calls the shots) in development power to produce such an assistant is ridiculous enough, but the suggestion that DEs should come with it already integrated? Hell no

    One useful application of “AI” (machine learning) I could see: evaluating logs to detect recurring errors and cross-referencing them with other logs to see if there are correlations, which might help with troubleshooting.
    That doesn’t need to be an integrated desktop assistant, it can just be a regular app.
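    As a hypothetical first pass, you don’t even need a trained model: plain frequency counting over normalised messages already surfaces the recurring errors, and an ML layer could then cluster near-duplicates or correlate spikes across logs. Everything below, including the sample log lines, is invented:

```python
from collections import Counter
import re

# Invented sample data; a real tool would read actual log files.
log_lines = [
    "2024-05-01 12:00:01 ERROR disk: read timeout on /dev/sda",
    "2024-05-01 12:00:05 ERROR net: connection reset",
    "2024-05-01 12:03:11 ERROR disk: read timeout on /dev/sda",
]

def error_signature(line):
    """Strip the timestamp so identical errors collapse into one bucket."""
    m = re.match(r"\S+ \S+ ERROR (.+)", line)
    return m.group(1) if m else None

counts = Counter(sig for line in log_lines if (sig := error_signature(line)))
most_common, n = counts.most_common(1)[0]
print(f"{n}x {most_common}")  # 2x disk: read timeout on /dev/sda
```
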

    Really, that applies to every possible AI tool. Make it an app, if you care enough. People can install it for themselves if they want. But for the love of the Machine God, don’t let the hype blind you to the issues.





    I once had difficulties running some apps on Proton that used .NET features not supported by Mono; Mono has since been updated, and those apps now work out of the box.

    I’m playing Trackmania on Wine and I’ve played Elden Ring and Monster Hunter: World on Proton, so I’m wondering which issue you’re running into.

    Regardless, building precompiled Linux native binaries is a commendable goal. Others have mentioned Flatpak, which imo is a good and user-friendly way to handle that.


    Do you mean the individual .git repository tracking changes in a given directory? Or the remote repository server that you push your changes to and can pull others’ changes from? The first one is the fundamental requirement of using git at all; the second is where it gets less trivial.

    It’s not that the software isn’t available. Off the top of my head, GitLab offers their community version for free to download and host yourself. I think they even have a Docker image. All you need is to figure out how you would like to do that.

    It’s the usual question of self-hosting - where would you host it? A server at home? The cloud? Should others be able to access it? How? What about security?

    Remotes already hosted by others are just a lot more convenient. You don’t worry about the infrastructure, you just push your code. People like me might get more excited about setting up than the actual coding. It’s the bane of half my projects - gotta get that git workflow in place, think long-term, set up the “mandatory PR with tests before merge” and shit until eventually I have everything set up… and the spark of the original script I wanted to do is gone.

    If you want to focus on coding, the benefits of a ready setup are hard to dismiss.
    On the other hand, setting up and configuring a server can be a one-time job, so if that’s worth it to you, power to you!


  • I asked for a rough description because I didn’t wanna bother anyone to take the time for a full, detailed explanation…

    …then you come along and write a whole article on it that’s most certainly more informative and useful than anything Google would have spat out.

    I love that. Thanks so much for taking the time. I also think I’ll give Bazzite / Fedora Atomic a shot. The idea of simply rebasing onto a different option to try different things is definitely appealing.





  • luciferofastora@lemmy.zip to Linux@lemmy.ml · Crapped my system · edited · 2 months ago

    I’ve had good luck striking out on a new path with Nobara after years of only ever using Ubuntu. There was a bit of a learning curve (and I still haven’t gotten everything I wanted to work the way it did before), but I mostly got it figured out.

    But that may well be survivorship bias; I have no idea how many people tried and decided it wasn’t worth it.

    I did have a bone to pick with PipeWire because my old PulseAudio config no longer worked and I had difficulties figuring out just how to redo it in PipeWire, but that’s probably not distro-specific.


  • As someone on the outskirts of Data Science, probably something along the lines of “Just what the fuck does my customer actually need?”

    You can’t throw buzzwords and a poorly labeled spreadsheet at me and expect me to go deep-diving into a trash heap of data and magically pull out a reasonable answer. “Average” has no meaning if you don’t give me anything to average over. I can’t tell you what nobody has ever recorded anywhere, because we don’t have any telepathic interfaces (and would probably get in trouble with the workers’ council if we tried to get one).

    I’m sure there are many interesting questions to be debated in this field, but on the practical side, humans remain the greatest mystery.