• 85 Posts
  • 723 Comments
Joined 1 year ago
Cake day: June 9th, 2023





  • j4k3@lemmy.world to Lemmy.World Announcements (Announcements@lemmy.world): Looking for new Site Admins

    I’ve been pushing positivity since the beginning, before the first 1.5k users joined this server: https://lemmy.world/post/36032

    I would do it, and I have the time, but I don’t think your setup meets my requirements to make it manageable. I don’t use proprietary software, and I run a whitelist firewall that only allows addresses I know and trust, like this server. While I could spin up a secondary network or even a Windows machine, I don’t care to do so at all, and certainly not regularly. However, I’m basically at a computer all the time anyway.

    What’s up with the haters? Is it me, or something I said?



  • What do you think it would take to automate this vote tally with a bot? Ideally, one could set up the whole thing to tally the votes, send a few direct messages around, and fall back to defaults if no reply arrives within a given amount of time; something like the sketch below. I’m not criticising anyone here. From my perspective, after hosting once, the whole tally and generation of the next challenge is a bit of a drag, where automation and routine would help improve participation IMO.
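    As a rough starting point, here is a minimal sketch of the tally half, assuming Lemmy’s v3 HTTP API (the instance URL and post ID are placeholders, and the endpoint parameters are my best understanding of the API, worth double-checking against the docs):

    ```python
    # Minimal sketch of the tally step for a single challenge post.
    # INSTANCE and POST_ID are hypothetical placeholders.
    import requests

    INSTANCE = "https://lemmy.world"
    POST_ID = 123456

    def tally_votes(instance: str, post_id: int) -> list[tuple[int, str]]:
        """Fetch top-level comments on a post and rank them by score."""
        resp = requests.get(
            f"{instance}/api/v3/comment/list",
            params={"post_id": post_id, "max_depth": 1, "limit": 50},
            timeout=30,
        )
        resp.raise_for_status()
        results = []
        for entry in resp.json()["comments"]:
            score = entry["counts"]["score"]      # upvotes minus downvotes
            excerpt = entry["comment"]["content"][:60]
            results.append((score, excerpt))
        return sorted(results, reverse=True)

    if __name__ == "__main__":
        for score, excerpt in tally_votes(INSTANCE, POST_ID):
            print(f"{score:>4}  {excerpt}")
    ```

    The direct-message and fallback-timeout half would need an authenticated bot account, which is where most of the real work lies.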


  • Invisibly, by trying to post in it and encouraging others to do so. There is not much management to do with such a small community. The majority of regular users watch the All feed, so subscriptions are really just a way to bookmark a community in order to post in it or find it more easily. For a smaller or newer community, expect it to be more like your personal blog, as it is unlikely to be something others post in regularly. The majority of hourly-active communities were made prior to the Reddit exodus of June 2023, or within a few weeks thereafter.

    Unless you’re in a very controversial space, actively micromanaging a community is likely an issue with the mod, not the community, IMO. The admins take care of the majority of whack-a-mole nonsense here.


  • How are finishes so durable and thin?

    My assumption that there is no post-processing comes from my background in automotive refinishing and repair; I owned a shop and painted for many years, along with getting into custom art graphics and airbrushing. The only finishes I know of that provide similar durability are two-part urethanes, and those are far too thick by comparison. When cutting into plastics that have been moulded, the finish shows no signs of the mechanical layering or bonding of a post-process finish in most cases. Often a cleanly broken or cut part shows the kind of penetrating surface alteration I associate with a polishing operation, where the surface transitions in color and grain structure within a millimeter or a few (in cases where the break is clean and does not appear to be influenced by stress alterations, like ABS whitening under tension).

    How does chromate conversion work with a prep regime, and what kind of wet paint can offer durability similar to a 2K urethane while being impossibly thin? I know the limitations of urethane well when it comes to corners and pointy bits, where it thins from surface tension. There is not a chance in hell that the buttons on the side of my phone could be painted with such a finish in an even conformal coating and remain durable through years of constant abrasion. Is there a name for this class and type of finish? Where are they sourced? What is the scale of the industry? Is there a way to access the process and products at a small scale?




  • Slowly trying to learn sh while using mostly bash. Convenience is nice and all, but when I encounter something like OpenWRT or Android, I don’t like the feeling of speaking a foreign language. Maybe if I can get super familiar with sh, then I might explore prettier or more convenient options, but I really want to know how to deal with the most universal shell.


  • Yeah, but it depends on a person’s goals. I don’t mind being doxed. The privacy thing I’m really concerned about is manipulation of data related to the host server: apps used as data loggers for sensors, tracking of dwell time, page views, likes, blocks, etc. I care far less about what I say to others in public. I vehemently claim that owning the data about any individual is theft of autonomy, a failure of democracy and government, and a form of slavery if one plays out the total philosophical circumstance and its implications. Anyone who holds such data about someone else with the intent to manipulate in any way whatsoever is a criminal. I’ve been a buyer for a retail chain and collected and analysed tons of customer data. That had nothing to do with how data is collected and used now, yet it is used as justification for the present criminal data-manipulation industry.

    As a disabled person, I need to connect with humans more, and as much as I can here. I totally respect those of you who have other priorities that limit your conversational topics of interest, and I don’t wish to violate those. This place is just my version of a public square, where I’m trying to make general conversation. -warmly



  • So flash memory works in units called pages, which are grouped into larger erase blocks. Each page contains a header that ends in a few bytes saying what the rest of the page maps to.

    If the file was encrypted, you’re probably SOL. If it was not encrypted, it may be possible to recover some parts of the files. This is extremely advanced data recovery. I only know the abstract basic principles and would likely struggle to figure this out and recover my own stuff if I ever needed to. I’ve only programmed microcontrollers and flash memory devices.

    A micro SD card contains a small microcontroller and some blocks of flash memory, although the microcontroller is transparent to the user and operating system… unless you’re hacking with needle probes in a lab.

    So here’s the basics. Writing flash involves taking an entire page of memory and erasing it first. (People say “zeroing,” but an erased flash cell actually reads back as a 1, so a blank page reads as all 0xFF; programming can only flip bits the other way.) There is a tiny voltage-booster circuit on the card that allows the page to be pulsed a few times in order to completely blank the entire page without any residual charge. Once this is done and the entire page has been erased, only then is it possible to write data into the bytes of the page.

    If you want to change a single byte in an address that already contains a value, the entire page is first copied to a blank page in another location, then the old page is pulsed a few times to erase it, then each value is transferred back into the old page, except that the byte that needed to change is written with its new value.

    This is the proper way to write flash at a basic level. If the power is lost in the middle of this cycle, the worst case is that the new value was not written. The page in question should never be “missing,” because the header record should always point to either the original or the copied page; one of the two should always be present and complete… in a proper setup. Obviously, it might be faster to simply hold the page in some RAM, erase the old page, and rewrite it. I have no idea what size pages are in modern SD cards, but on the hobby-class microcontrollers I have used, pages were 4096 bytes, IIRC. My understanding is that most SD cards use an 8051-clone micro, so it is probably a similar size.
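    To make that copy-erase-rewrite cycle concrete, here is a toy Python model of it (page size and layout are illustrative, nothing like real SD card firmware):

    ```python
    # Toy model of a flash byte update: pages must be erased before
    # writing, and a spare page keeps a complete copy so power loss
    # never leaves both pages invalid.
    PAGE_SIZE = 4096
    ERASED = 0xFF  # erased flash cells read back as all 1s

    class ToyFlash:
        def __init__(self, num_pages: int):
            self.pages = [bytearray([ERASED] * PAGE_SIZE) for _ in range(num_pages)]

        def erase(self, page: int) -> None:
            # models the pulsed erase that blanks the whole page at once
            self.pages[page][:] = bytes([ERASED]) * PAGE_SIZE

        def program(self, page: int, data: bytes) -> None:
            # real flash can only clear bits (1 -> 0), so writes require
            # an erased page; enforce that in the model
            assert all(b == ERASED for b in self.pages[page]), "page not erased"
            self.pages[page][: len(data)] = data

        def update_byte(self, page: int, spare: int, offset: int, value: int) -> None:
            # 1. copy the whole page to a blank spare page
            snapshot = bytes(self.pages[page])
            self.erase(spare)
            self.program(spare, snapshot)
            # 2. erase the original page (power loss here still leaves
            #    the spare copy complete)
            self.erase(page)
            # 3. write everything back with the one byte changed
            updated = bytearray(snapshot)
            updated[offset] = value
            self.program(page, bytes(updated))

    flash = ToyFlash(num_pages=2)
    flash.erase(0)
    flash.program(0, b"hello flash")
    flash.update_byte(page=0, spare=1, offset=0, value=ord("H"))
    print(bytes(flash.pages[0][:11]))  # b'Hello flash'
    ```

    The assert is the whole point: real flash programming can only clear erased bits, which is why the erase step is unavoidable.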

    So here’s the thing: the bulk of the data is always there. Somewhere deep down inside, you likely already knew this. It is why you’re supposed to overwrite an entire drive instead of using the “quick” erase in most formatting tools. The quick erase simply deletes a tiny header that says what exists where on the drive. Similarly, somewhere on your SD card there is a page or a few where the header has been screwed up. Your OS is looking at this header info, seeing a mismatch of garbled junk, and saying f-that bs.

    Generally, recovery would involve dumping the raw contents of the flash memory as hexadecimal, being super familiar with what you’re looking at, and knowing how to find the page that is causing the error. Generally, I assume you’d need to replace the bad page with a good header and it would then work. There are services for this kind of operation: data recovery. In practice, there are a few more layers of complication. Pages can be placed in different locations to enable wear leveling, so one area of memory is not overutilized. There is also a table of bad blocks/pages that the micro knows to skip, and there is usually a bit or address in each page used to detect errors that may have occurred.
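    As a toy example of what “being super familiar with what you’re looking at” means, here is a minimal file-carving sketch that scans a raw dump for JPEG markers (“dump.bin” is a hypothetical raw image of the card; real tools like PhotoRec do this far more robustly):

    ```python
    # Minimal file-carving sketch: scan a raw dump for JPEG signatures.
    # "dump.bin" is a hypothetical raw image of the card's flash contents.
    JPEG_SOI = b"\xff\xd8\xff"   # start-of-image marker
    JPEG_EOI = b"\xff\xd9"       # end-of-image marker

    with open("dump.bin", "rb") as f:
        raw = f.read()

    start = 0
    count = 0
    while (begin := raw.find(JPEG_SOI, start)) != -1:
        end = raw.find(JPEG_EOI, begin)
        if end == -1:
            break
        count += 1
        with open(f"carved_{count}.jpg", "wb") as out:
            out.write(raw[begin : end + len(JPEG_EOI)])
        start = end + len(JPEG_EOI)

    print(f"carved {count} candidate JPEGs")
    ```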

    This is pretty much everything I know on the subject. Hopefully it helps you understand the abstract nature of what is happening. In the simplest terms, flash memory is like writing a long essay with an ink pen, where you cannot make mistakes or use whiteout: if you need to make a change, you must write out the entire page all over again. This process is what is so time critical that you must “eject” the drive.


  • Diffusion models do not parse natural language like an LLM. All behavior that appears like NLP is illusory, for the most part. You can get away with some things because of what is present in the training corpus. However, any time you use a noun, you are assigning a weighted image priority. By repeating “shuttle” in this prompt, you’ve heavily biased it to feature the shuttle regardless of the surrounding context. It is not contextualising; it is ‘word weighting’. There is a relationship to the other words of the prompt, but they are not conceptually connected.
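    A toy way to see the ‘word weighting’ idea (made-up vectors, a crude simplification of how prompt tokens get pooled, not a real encoder):

    ```python
    # Toy illustration: if a prompt's meaning is (crudely) a pooled
    # average of per-token vectors, repeating a token drags the pool
    # toward it. Vectors here are made up, not real CLIP embeddings.
    import numpy as np

    embed = {
        "shuttle": np.array([1.0, 0.0]),
        "garden":  np.array([0.0, 1.0]),
    }

    def pooled(prompt: list[str]) -> np.ndarray:
        return np.mean([embed[w] for w in prompt], axis=0)

    print(pooled(["shuttle", "garden"]))             # balanced: [0.5 0.5]
    print(pooled(["shuttle", "shuttle", "garden"]))  # biased toward shuttle: ~[0.67 0.33]
    ```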

    In an LLM there are special tokens that are used to dynamically ensure that the key points of the input are connected to the output, but this system is not present in image-generation models.

    To illustrate: I like to download LoRAs to use on offline models, and I use a few tools to probe them and determine how they were made, like the tags used with the training images, what base model was used, and the training settings. Around a third of the LoRAs I have downloaded were trained on images tagged with natural language. This means the LoRA-related terms I use for generating should also be written in natural language.
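    For example, here is a minimal sketch of that kind of probing, assuming the LoRA is a .safetensors file (the file name is hypothetical, and keys like ss_tag_frequency are kohya-style trainer conventions that may be absent):

    ```python
    # Read the JSON header of a .safetensors LoRA and print its training
    # metadata. The format starts with an 8-byte little-endian length,
    # followed by that many bytes of JSON.
    import json
    import struct

    def read_metadata(path: str) -> dict:
        with open(path, "rb") as f:
            (header_len,) = struct.unpack("<Q", f.read(8))
            header = json.loads(f.read(header_len))
        return header.get("__metadata__", {})

    meta = read_metadata("some_lora.safetensors")  # hypothetical file
    # kohya-style trainers often store these keys (not guaranteed):
    for key in ("ss_base_model_version", "ss_tag_frequency"):
        print(key, "->", meta.get(key, "<absent>"))
    ```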

    This is the same principle for any model. You should always ask yourself: how often does this terminology occur in the tags below an image? You might check out gelbooru or danbooru just to have a look at the tag system used there for all images. That is very similar to how training happens for the vast majority of imagery. It is very simplified overall.
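    A quick way to check, assuming danbooru’s public tag endpoint (the endpoint shape is my best understanding of their API and may have changed):

    ```python
    # Sketch: check how often a term appears as a tag on danbooru.
    # Treat the endpoint and field names as illustrative assumptions.
    import requests

    def tag_count(tag: str) -> int:
        resp = requests.get(
            "https://danbooru.donmai.us/tags.json",
            params={"search[name_matches]": tag, "limit": 1},
            timeout=30,
        )
        resp.raise_for_status()
        matches = resp.json()
        return matches[0]["post_count"] if matches else 0

    # tag-style terms hit; natural-language phrases come back empty
    for term in ("space_shuttle", "a photo of a space shuttle"):
        print(term, "->", tag_count(term))
    ```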

    The negative prompt is processed very differently from the positive one. If you look at the documentation for the tool you’re using, it might expose some syntax to create a negative line, but they likely want you to use their API with a more advanced tool.


  • MIPS is Stanford’s alternative architecture to Berkeley’s RISC-I/RISC-II. I was somewhat concerned about their stuff in routers, especially since the primary bootloader used is proprietary.

    The person who wrote the primary bootloader is the same person writing most of the Mediatek kernel code in mainline. I forget where I put together their story, but I think they were some kind of prodigy who reverse engineered and wrote an entire bootloader from scratch, implying a very deep understanding of the hardware. IIRC I saw that info years ago on the U-Boot forum, where someone accused the Mediatek bootloader of copying U-Boot. Again IIRC, their bootloader was being developed open source, and some kind of partial source is still on a git somewhere. However, they wound up working for Mediatek and are now doing all the open source stuff.

    I found them on the OpenWRT forum and was a bit of an ass, asking why they didn’t open source the bootloader code. After that, some of the more advanced users on OpenWRT explained to me how the bootloader is static, which I already kinda knew; I mean, I know it sits on a flash memory chip on the SPI bus. That makes it much easier to monitor the starting state and what is really happening. These systems are very old 1990s-era designs; there is not a lot of room to do extra stuff unnoticed.

    On the other hand, all cellular modems are completely undocumented, as are all WiFi modems since the early 2010s, the last open source WiFi chips being the Atheros line.

    There is no telling what is happening with cellular modems. I will say that integrated, nonremovable batteries have nothing to do with design or advancement: they turn phones into capable monitoring devices that cannot be turned off.

    However, if we can monitor all registers in a fully documented SoC, we can fully monitor and control a peripheral bus in most instances.

    Overall, I have little issue with Mediatek compared to Qualcomm. They are largely emulating the behavior of the bigger player, Broadcom.


  • The easiest way to distinguish that I’m human is the patterns, as others have mentioned, assuming you’re familiar with the primary Socrates entity’s style in the underlying structure of the LLM. The other easy way to tell I’m human is my conceptual density and mobility when connecting concepts across seemingly disconnected spaces. Presently, the way I am connecting politics, history, and philosophy to draw a narrative about a device, consumers, capitalism, and venture capital is far beyond the attention scope of the best AI. No doubt the future will see AI rise an order of magnitude to meet me, but that is not the present. AI has far more info available, but far less scope in any given subject when it comes to abstract thought.

    The last easy way to see that I am human is that I can talk about politics in a critical light. Politics is the most heavily bowdlerized space in any LLM at present. None of the models can say much more than gutter responses: form-letter-like replies, overtrained in this space so that all questions land on predetermined answers.

    I play with open source offline AI a whole lot, but I will always tell you if and how I’m using it. I’m simply disabled, with too much time on my hands, and y’all are my only real random human interactions. -warmly

    I don’t fault your skepticism.


  • All their hardware documentation is locked under NDA; nothing is publicly available about the hardware at the hardware-registers level.

    For instance, the base Android system (AOSP) is designed to use Linux kernels that are prepackaged by Google. These kernels are documented specifically so that manufacturers can add their hardware support at the last possible moment, as binary-only modules. These modules are what make the specific hardware work. No one can update the kernel on the device without the source code for those modules. As the software ecosystem evolves, the ancient orphaned kernel creates more and more problems. This is the only reason you must buy new devices constantly. If the hardware remained publicly undocumented but just the source code for the modules present on the device were merged into the kernel, the device would be supported for decades. If the hardware were documented publicly, we would write our own driver modules and have a device that is supported for decades.

    This system is about like selling you a car that can only use gas refined prior to your purchase of the vehicle. That would be the same level of hardware theft.

    The primary reason governments won’t care or make effective laws against orphaned kernels is that bleeding-edge chip foundries are the primary driver of the present economy. A new foundry is the most expensive commercial endeavor in all of human history, and it is largely funded by these devices and this depreciation scheme.

    That is both sides of the coin, but it is done by stealing ownership from you. Individual autonomy is our most expensive resource. It can only be bought with blood and revolutions. This is the primary driver of the dystopian neofeudalism of the present world. It is the catalyst that fed the sharks that have privateered (legal piracy) healthcare, home ownership, work-life balance, and democracy. It is the spark of a new wave of authoritarianism.

    Before the Google “free” internet (ownership of your digital person, to exploit and manipulate), all x86 systems were fully documented publicly. The primary reason AMD exists is that we (the people) were so distrusting of these corporations stealing and manipulating that governments, militaries, and large corporations required second sourcing of chips before purchasing with public funds. We knew that products-as-a-service is a criminal extortion scam way back then. AMD was the second source for Intel and produced x86 chips under license. Only after that did they recreate an instruction-compatible alternative from scratch. There was a big legal case in which Intel tried to claim copyright over their instruction set, but they lost; that established AMD as an independent x86 maker. Since 2012, both Intel and AMD have had proprietary code. This is primarily because the original 8086 patents expired; most of the hardware could be produced anywhere after that.

    In practice, there are only Intel, TSMC, and Samsung on bleeding-edge fab nodes. Bleeding edge is all that matters. The price to bring one online is extraordinary, and the tech it requires is only made once, for a short while. The cutting-edge devices are what pay for the enormous investment, but once the fab is paid for, the cost of continuing to run it is relatively low. The number of fabs within a node is carefully chosen to try to accommodate trailing-edge node demand. No new trailing-edge nodes are viable to reproduce; there is no store where you can buy fab node hardware. As soon as all of a node’s hardware is built by ASML, they start building the next node.

    But if x86 is proprietary too, why is it different from Qualcomm/Broadcom? (No one asked.) The proprietary parts are of some concern. There is an entire undocumented operating system running in the background of your hardware; that is the most concerning part. The primary proprietary piece is the microcode. This basically governs the power-up phase of the chip, like the order in which things are given power and the instruction set that is made available. There are no unique chip designs for most consumer hardware: dies are classed by quality and functionality and sorted to create the various products we see. Your slower laptop chip might be the same die as a desktop variant that didn’t perform at the required speed; power is connected differently, and it becomes a laptop chip.

    When it comes to trending hardware, never fall for the Apple trap. They design nice stuff, but on the back end, Apple always uses junky hardware and excellent in-house software to make up the performance gap. They are a hype machine. The only architecture Apple has used and hasn’t abandoned because it went defunct is x86. They used MOS in the beginning; the 6502 was absolute trash compared to the other available processors. It used a pipeline trick to hack twice the effective clock speed because they couldn’t fab competitive-quality chips; they were just dirt cheap compared to the competition. Then it was Motorola. Then PowerPC. All of these are now irrelevant.

    The British group that started Acorn sold the company right after RISC-V passed the major hurdle of getting out of Berkeley’s ownership grasp. It is a slow-moving train, like all hardware, but ARM’s days are numbered. RISC-V does the same fundamental thing without the royalty. There is a ton of hype because ARM is cheap and everyone is trying to grab the last treasure chests they can off the slowly sinking ship. In 10 years it will be dead in all but old legacy device applications. RISC-V is not a guarantee of a less proprietary hardware future, but ARM is one of the primary cornerstones blocking end-user ownership. They are enablers for thieves; the ones opening your front door to let the others inside.

    Even the beloved Raspberry Pi is a proprietary market-manipulation and control scheme. It is not actually open source at the registers level, and it is priced to prevent the scale viability of a truly open source and documented alternative. The chips come from a failed cable TV tuner box, and they are only made in a trailing-edge fab when the fab has no other paid work. They are barely above cost and a tax write-off, hence the “foundation” and dot-org despite selling commercial products.





  • It has a lot of potential if the T5 can be made conversational. After diving into a custom DPM adaptive sampler, I found there is a lot more specificity required. I believe the vast majority of people are not using the model with the correct workflow; applying the old model workflows to SD3 makes garbage results. The two CLIP models and the T5 need separate prompts, and the negative prompt needs an inverted channel with a slight delay before reintegration, roughly as sketched below. I also think the smaller quantized version of the T5 is likely the primary problem overall. Any transformer text model that small, then quantized to an extremely small size, is problematic.
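    To show the shape of what I mean by separate prompts per encoder, here is a bare data-structure sketch (plain Python, not any real pipeline’s API; the delayed, inverted negative channel is my own workflow, not official guidance):

    ```python
    # Hypothetical sketch of SD3-style multi-encoder prompting. These are
    # plain data structures showing the workflow shape, not a pipeline API.
    from dataclasses import dataclass

    @dataclass
    class Conditioning:
        clip_l: str   # short, tag-style prompt for the first CLIP
        clip_g: str   # second CLIP channel
        t5: str       # full natural-language prompt for the T5 encoder

    @dataclass
    class NegativeChannel:
        prompt: str
        inverted: bool    # invert the conditioning...
        delay_steps: int  # ...and reintegrate it after a slight delay

    cond = Conditioning(
        clip_l="woman lying on grass, photo",
        clip_g="woman lying on grass, overhead view",
        t5="A photograph, taken from above, of a woman lying on her back "
           "in a grassy meadow with natural lighting.",
    )
    neg = NegativeChannel("deformed limbs, extra fingers", inverted=True, delay_steps=4)
    print(cond, neg, sep="\n")
    ```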

    The license is garbage. The company is toxic. But the tool is more complex than most of the community seems to understand. I can generate a woman lying on grass in many intentional and iterative ways.