• 0 Posts
  • 17 Comments
Joined 1 year ago
Cake day: June 17th, 2023


  • That’s not what lossless data compression schemes do:
    In lossless compression the general idea is to create a codebook of commonly occurring patterns and use those as shorthand.
    For example, one of the simplest and now ancient algorithms LZW does the following:

    • Initialize the dictionary to contain all strings of length one.
    • Find the longest string W in the dictionary that matches the current input.
    • Emit the dictionary index for W to output and remove W from the input.
    • Add W followed by the next symbol in the input to the dictionary.
    • repeat
      Basically, instead of rewriting long sequences, it just writes down the index into an existing dictionary of already seen sequences.
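    As a minimal sketch, this is roughly what that loop looks like in Python (illustrative only: it emits a list of dictionary indices rather than a real bitstream):

    ```python
    def lzw_compress(text: str) -> list[int]:
        # Step 1: the dictionary starts with every single-character string.
        dictionary = {chr(i): i for i in range(256)}
        next_code = 256

        result = []
        w = ""
        for c in text:
            wc = w + c
            if wc in dictionary:
                # Keep extending W while the longer string is still in the dictionary.
                w = wc
            else:
                # Emit the index for W, then add W + next symbol as a new entry.
                result.append(dictionary[w])
                dictionary[wc] = next_code
                next_code += 1
                w = c
        if w:
            result.append(dictionary[w])
        return result

    # 8 characters come out as 5 codes, because "ab", "ba", "aba"
    # get dictionary entries as the input is scanned.
    print(lzw_compress("abababab"))  # [97, 98, 256, 258, 98]
    ```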

    However, once this is done, you still need to find an encoding that takes your character set (the original characters + the new dictionary references) and turns it into bits.
    It turns out that we can do this optimally: using an algorithm called arithmetic coding, we can match the length of a bitstring to the amount of information it contains.
    “Information” here meaning the statistical concept of information, which depends on the inverse likelihood a certain character is observed.
    Logically this makes sense:
    Let’s say you have a system that measures earthquakes. As one would expect, most of the time, let’s say 99% of the time, you will see “no earthquake”, while in 1% of the cases you will observe “earthquake”.
    Since “no earthquake” is a lot more common, the information gain is relatively small (if I told you “the system said no earthquake”, you could have guessed that with 99% confidence: not very surprising).
    However, if I tell you “there is an earthquake”, this is much more surprising and therefore carries more information.

    From information theory (a branch of mathematics), we know that if we want to maximize the efficiency of our codec, we have to match the length of every character to its information content. Arithmetic coding now gives us a general way of doing this.
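    To put rough numbers on the earthquake example (using the standard -log2(p) definition of self-information; the values are just the 99%/1% from above):

    ```python
    import math

    # Self-information of an event with probability p, measured in bits.
    def information_bits(p: float) -> float:
        return -math.log2(p)

    p_no_quake, p_quake = 0.99, 0.01
    print(information_bits(p_no_quake))  # ~0.0145 bits: barely surprising
    print(information_bits(p_quake))     # ~6.64 bits: rare, so worth many bits

    # The average information (entropy) is the best bits-per-reading any
    # lossless code, e.g. an ideal arithmetic coder, can approach for this source.
    entropy = p_no_quake * information_bits(p_no_quake) + p_quake * information_bits(p_quake)
    print(entropy)  # ~0.08 bits per reading instead of a full bit
    ```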

    However, we can do even better:
    Instead of just considering individual characters, we can also add in character pairs!
    Of course, it doesn’t make sense to add in every possible character pair, but for some of them it makes a ton of sense:
    For example, if we want to compress English text, we could give a separate codebook entry to the entire sequence “the” and save a ton of bits!
    To do this for pairs of characters in the English alphabet, we have to consider 26*26=676 combinations.
    We can still do that: just scan the text a few hundred times.
    With 3-character combinations it becomes a lot harder: 26*26*26=17576 combinations.
    But with 4 characters it’s impossible: you already have almost half a million combinations!
    In reality, this is even worse, since you have way more than 26 characters: you have things like ", . ? ! and your codebook ids which blow up the size even more!
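    Written out as plain arithmetic (ignoring punctuation and codebook ids, which only make it worse):

    ```python
    alphabet_size = 26
    for n in range(1, 6):
        combos = alphabet_size ** n
        print(f"{n}-character sequences: {combos:,} possible combinations")
    # 1-character sequences: 26 possible combinations
    # 2-character sequences: 676 possible combinations
    # 3-character sequences: 17,576 possible combinations
    # 4-character sequences: 456,976 possible combinations
    # 5-character sequences: 11,881,376 possible combinations
    ```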

    So, how are we supposed to figure out which character pairs to combine and how many bits we should give them?
    We can try to predict it!
    This technique, called PPM (prediction by partial matching), is already very old (~1980s), but is still used in many compression algorithms.
    The important trick is that with deep learning we can now train even more efficient estimators, without losing the lossless property:
    Remember, we only predict which things we want to combine and how many bits we want to assign to them!
    The worst-case scenario is that your compression gets worse because the model predicts nonsensical character-combinations to store, but that never changes the actual information you store, just how close you can get to the optimal compression.
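    As a toy illustration of that worst case (made-up probabilities, not a full arithmetic coder): the bit lengths come from the model’s predictions, so a bad model only inflates the average length, while decoding with the same model still reproduces the data exactly.

    ```python
    import math

    # True symbol frequencies vs. what two different models predict.
    true_p     = {"a": 0.5, "b": 0.3, "c": 0.2}
    good_model = {"a": 0.5, "b": 0.3, "c": 0.2}   # perfect predictions
    bad_model  = {"a": 0.2, "b": 0.3, "c": 0.5}   # nonsensical predictions

    def expected_bits(true_p, model):
        # Average code length if each symbol gets -log2(model probability) bits
        # (the cross-entropy between the true source and the model).
        return sum(p * -math.log2(model[s]) for s, p in true_p.items())

    print(expected_bits(true_p, good_model))  # ~1.49 bits/symbol: the optimum
    print(expected_bits(true_p, bad_model))   # ~1.88 bits/symbol: worse, but still lossless
    ```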

    The state-of-the-art in text compression has already used this for a long time (see the Hutter Prize); it’s just now getting to a stage where systems become fast and accurate enough to also make the compression useful for other domains/general-purpose compression.




  • Not really: you have to keep in mind the amount of expertise and resources that already went into silicon, as well as the geopolitics and sheer availability of silicon. The closest currently available competitor is probably gallium arsenide. That has a couple of disadvantages compared to silicon:

    • It’s more expensive (both due to economies of scale and the fact that silicon is just much more abundant in general)
    • GaAs crystals are less stable, leading to smaller boules.
    • GaAs is a worse thermal conductor
    • GaAs has no native “oxide” (compare to SiO₂) which can be directly used as an insulator
    • GaAs hole mobilities are worse (roughly 500 cm²/(V·s) for Si vs 400 for GaAs), which means p-channel FETs are naturally slower in GaAs, which makes CMOS structures impossible
    • GaAs is not a pure element but a compound, which means you get into trouble keeping the mix of the two elements exactly right
      You usually see GaAs combined with germanium substrates for solar panels, but rarely independently of that (GaAs is simply bad for logic circuits).
      In short: It’s not really useful for logic gates.

    Germanium itself is another potential candidate, especially since it can be alloyed with silicon which makes it interesting from an integration point-of-view.
    SiGe is very interesting from a logic POV thanks to its high forward and low reverse gain, which makes it attractive for low-current, high-frequency applications, and because you naturally get heterojunctions which allow you to tune the band-gap (on the other hand you get the same problem as in GaAs: it’s not a pure element, so you need to tune the band-gap).
    One problem specifically for mosfets is the fact that you don’t get stable silicon-germanium oxides, which means you can’t use the established silicon-on-insulator techniques.
    Cost is also a limiting factor: before even starting to grow crystals you have the pure material cost, which is roughly $10/kg for silicon and $800/kg for germanium.
    That’s why, despite the fact that the early semiconductors all relied on germanium, germanium-based systems never really became practical: it’s harder to do mass production, and even if you can start mass production it will be very expensive (that’s why, if you do see germanium-based tech, it’s usually in low-volume runs of high-cost specialised components).

    There’s some research going on in commercialising these techniques but that’s still years away.





  • They will make it open source, just tremendously complicated and expensive to comply with.
    In general, if you see a group proposing regulations, it’s usually to cement their own position: e.g. OpenAI is a frontrunner in ML for the masses, but doesn’t really have a technical edge over anyone else, therefore they run to Congress with “please regulate us”.
    Regulatory compliance is always expensive and difficult, which means it favors people that already have money and systems running right now.

    There are so many ways this can be broken, intentionally or unintentionally. It’s also a great way to identify and shut down e.g. government critics (e.g. if you are Chinese and everything is uniquely tagged to you: would you write about Tiananmen Square?), or to get monopolies on (dis)information.
    This is not literally forcing everyone to get a license for producing creative or factual work, but it’s very close, since you can easily discriminate against any creative or factual sources you find unwanted.

    In short, even if this is an absolutely flawless, perfect implementation of what they want to do, it will have catastrophic consequences.


  • The “adequate covering” of our distribution p is also pretty self-explanatory: We don’t need to see the statement “elephants are big” a thousand times to learn it, but we do need to see it at least once:

    Think of the p distribution as e.g. defining a function on the real numbers. We want to learn that function using a finite amount of samples. It now makes sense to place our samples at interesting points (e.g. where the function changes direction), rather than just randomly throwing billions of points against the problem.

    That means that even if our estimator is bad (i.e. it can barely distinguish real and fake data), it is still better than just randomly sampling (e.g. you can say “let’s generate 100 samples of law, 100 samples of math, 100 samples of XYZ,…” rather than just having a big mush where you hope that everything appears).
    That makes a few assumptions: the estimator is better than 0% accurate, the estimator has no statistical bias (e.g. the estimator didn’t learn things like “add all sentences that start with an A”, since that would shift our distribution), and some other things that are too intricate to explain here.

    Importantly: even if your estimator is bad, it is better than not having it. You can also manually tune it towards being a little bit biased, either to reduce variance (e.g. let’s filter out all HTML code), or to reduce the impact of certain real-world effects (like that most stuff on the internet is english: you may want to balance that down to get a more multilingual model).
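    As a sketch of what such a manual bias could look like (the corpus, field names, and thresholds here are made up for illustration):

    ```python
    import random

    # Toy corpus: each document tagged with its language and how much of it is HTML markup.
    corpus = [
        {"text": "some English text",  "lang": "en", "html_fraction": 0.05},
        {"text": "<div>menu</div>...", "lang": "en", "html_fraction": 0.80},
        {"text": "ein deutscher Text", "lang": "de", "html_fraction": 0.02},
        {"text": "un texte français",  "lang": "fr", "html_fraction": 0.10},
    ]

    # Reduce variance: drop documents that are mostly HTML boilerplate.
    filtered = [d for d in corpus if d["html_fraction"] < 0.5]

    # Counter a real-world skew: down-weight the over-represented language
    # so that sampling comes out closer to balanced across languages.
    weights = [0.3 if d["lang"] == "en" else 1.0 for d in filtered]
    sample = random.choices(filtered, weights=weights, k=2)
    print([d["lang"] for d in sample])
    ```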

    However, you have to note here that these are LANGUAGE MODELS. They are not everything models.
    These models don’t aim for factual accuracy, nor do they have any way of verifying it: That’s simply not the purview of these systems.
    People use them as everything models, because empirically there’s a lot more true stuff than nonsense in those scrapes and language models have to know something about the world to e.g. solve ambiguity, but these are side-effects of the model’s training as a language model.
    If you have a model that produces completely realistic (but semantically wrong) language, that’s still good data for a language model.
    “Good data” for a language model does not have to be “true data”, since these models don’t care about truth: that’s not their objective!
    They just complete sentences by predicting the next token, which is independent of factuality.
    There are people working on making these models more factual (same idea: you bias your estimator towards things that are more likely to be true, like boosting reliable sources such as Wikipedia, rather than training on uniformly weighted webscrapes), but to do that you need a lot more overview over your data, for which you need more efficient models, for which you need better distributions, for which you need better estimators (though in that case they would be “factuality estimators”).
    In general though the same “better than nothing” sentiment applies: if you have a sampling strategy that is not completely wrong, you can still beat purely random sampling. If your estimator is good, you can substantially beat it (and LLMs are pretty good at almost everything, which means you will get pretty good samples if you just sample according to the probability that the LLM tells you “this data is good”)

    For actually making sure that the stuff these models produce is true, you need very different systems that actually model facts, rather than just modelling language. Another way is to remove the bottleneck of machine learning models with respect to accuracy (i.e. you build a model that may be bad, but can never give you a wrong answer):
    One example would be vector-search engines that, like search engines, retrieve information from a corpus based on the similarity as predicted by a machine learning model. Since you retrieve from a fixed corpus (like wikipedia) the model will never give you wrong information (assuming the corpus is not wrong)! A bad model may just not find the correct e.g. wikipedia entry to present to you.
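    A minimal sketch of that retrieval idea (the embed function below is a crude stand-in; a real system would use a learned text encoder):

    ```python
    import numpy as np

    # Fixed, trusted corpus (think encyclopedia paragraphs). The model can only
    # ever *select* from these entries, so it cannot invent facts on its own.
    corpus = [
        "Elephants are the largest living land animals.",
        "The Eiffel Tower is located in Paris.",
        "Water boils at 100 degrees Celsius at sea level.",
    ]

    def embed(text: str) -> np.ndarray:
        # Stand-in embedding: normalised letter counts. A real system would
        # replace this with a learned encoder model.
        vec = np.zeros(26)
        for ch in text.lower():
            if ch.isalpha() and ch.isascii():
                vec[ord(ch) - ord("a")] += 1
        return vec / (np.linalg.norm(vec) + 1e-9)

    corpus_vecs = np.stack([embed(doc) for doc in corpus])

    def retrieve(query: str) -> str:
        # Cosine similarity between the query and every corpus entry; return the best match.
        sims = corpus_vecs @ embed(query)
        return corpus[int(np.argmax(sims))]

    print(retrieve("how big are elephants?"))
    ```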


  • Yes: keep in mind that with “good” nobody is talking about the content of the data, but rather how statistically interesting it is for the model.

    Really what machine learning is doing is trying to deduce a probability distribution q that approximates the true distribution p, given only samples x ~ p(x).
    The problem with statistical learning is that we only ever see an infinitesimally small amount of the true distribution (we only have finite samples from an infinite sample space of images/language/etc…).

    So now what we really need to do is pick samples that adequately cover the entire distribution, without being redundant, since redundancy produces both more work (you simply have more things to fit against), and can obscure the true distribution:
    Let’s say that we have a uniform probability distribution over [1,2,3] (uniform means everything has the same probability of 1/3).

    If we faithfully sample from this we can learn a distribution that will also return [1,2,3] with equal probability.
    But let’s say we have some redundancy in there (either direct duplicates, or, in the case of language, close-to duplicates):
    The empirical distribution may look like {1,1,1,2,2,3} which seems to make ones a lot more likely than they are.
    One way to deal with this is to just sample a lot more points: if we sample 6000 points, we are naturally going to get closer to the true distribution (similar to how flipping a coin twice can give you 100% tails, even if the coin is actually fair. Once you flip it more often, it will return to the true probability).

    Another way is to correct our observations towards what we already know to be true in our distribution (e.g. a direct 1:1 duplicate in language is presumably a copy-paste rather than a true increase in probability for a subsequence).
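    In toy form (same numbers as above):

    ```python
    from collections import Counter

    samples = [1, 1, 1, 2, 2, 3]                 # redundant observations
    empirical = {k: v / len(samples) for k, v in Counter(samples).items()}
    print(empirical)                             # {1: 0.5, 2: 0.33, 3: 0.17} -- looks skewed

    # Correcting for known redundancy (treating exact duplicates as copy-paste, not signal):
    unique = sorted(set(samples))
    corrected = {k: 1 / len(unique) for k in unique}
    print(corrected)                             # back to the uniform 1/3 each
    ```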

    <continued in next comment>


  • That paper makes a bunch of (implicit) assumptions that make it pretty unrealistic: basically they assume that once we already have decently working models, we would still continue to do normal “brain-off” web scraping.
    In practice you can use even relatively simple models to start filtering and creating more training data:
    Think about it like the original LLM being a huge trashcan in which you try to compress terabytes of mostly garbage web data.
    Then, you use fine-tuning (like the instruction tuning used for the assistant models) to increase the likelihood of deriving non-trash from the model (or to accurately classify trash vs non-trash).
    In general this will produce a dataset that is of significantly higher quality, simply because you got rid of all the low-quality stuff.
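    Sketched out, that filter step could look something like this (the quality scorer here is a hypothetical stand-in; in practice it would be a fine-tuned model or classifier, not this crude heuristic):

    ```python
    def quality_score(document: str) -> float:
        # Hypothetical stand-in for a learned quality model: here just a crude
        # "how repetitive is this" heuristic so the example runs end to end.
        words = document.split()
        if not words:
            return 0.0
        return len(set(words)) / len(words)      # spammy, repetitive text scores low

    web_scrape = [
        "buy cheap buy cheap buy cheap buy cheap",
        "The derivative of sin(x) is cos(x), which follows from the limit definition.",
    ]

    # Keep only the documents the (imperfect) scorer considers high quality;
    # even a mediocre scorer throws away a lot of obvious garbage.
    filtered_dataset = [doc for doc in web_scrape if quality_score(doc) >= 0.8]
    print(filtered_dataset)
    ```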

    This is not even a theoretical construction: Phi-1 (https://arxiv.org/abs/2306.11644) does exactly that to train a state-of-the-art language model on a tiny amount of high quality data (the model is also tiny: only half a percent the size of gpt-3).
    Previously, TinyStories (https://arxiv.org/abs/2305.07759) showed something similar: you can build high-quality models with very little data, if you have good data (in the case of TinyStories they generate simple stories to train small language models).

    In general LLM people seem to re-discover that good data is actually good and you don’t really need these “shotgun approach” web scrape datasets.


  • It really depends on what you want: I really like obsidian which is cross-platform and uses basically vanilla markdown which makes it easy to switch should this project go down in flames (there are also plugins that add additional syntax which may not be portable, but that’s as expected).

    There’s also logseq which has much more bespoke syntax (major extensions to markdown), but is also OSS meaning there’s no real danger of it suddenly vanishing from one day to the next.
    Specifically Logseq is much heavier than obsidian both in the app itself and the features it adds to markdown, while obsidian is much more “markdown++” with a significant part of the “++” coming from plugins.

    In my experience logseq is really nice for short-term note taking (e.g. lists, reminders, etc) and obsidian is much nicer for long-term notes.

    Some people also like notion, but I never got into that: it requires much more structure ahead of time and is very locked down (it also obviously isn’t self-hosted). I can see notion being really nice for people that want less general note-taking and more custom “forms” to fill out (e.g. traveling checklists, production planning, etc…).

    Personally, I would always go with obsidian, just for the peace of mind that the markdown plays well with other markdown editors, which is important for me if I want a long-running knowledge base.
    Unfortunately I cannot tell you anything with regards to collaboration, since I do not use that feature in any note-taking system.


  • For example, if you had an 8-bit integer represented by a bunch of qbits in a superposition of states, it would have every possible value from 0-256 and could be computed with as though it were every possible value at once until it is observed, the probability wave collapses, and a finite value emerges. Is this not the case?

    Not really, or at least it’s not a good way of thinking about it. Imagine it more like rigging coin tosses: You don’t have every single configuration at the same time, but rather you have a joint probability over all bits which get altered to produce certain useful distributions.
    To get something out, you then make a measurement that returns the correct result with a certain probability (i.e. it’s a probabilistic turing machine rather than a nondeterministic one).
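    A tiny numerical sketch of that picture (simulating the state vector classically; “rigging” the coin toss here is just a Hadamard gate that reshapes the amplitudes):

    ```python
    import numpy as np

    # Two qubits = 4 complex amplitudes, one per basis state 00, 01, 10, 11.
    # This is one joint distribution over the bits, not "all values stored at once".
    state = np.zeros(4, dtype=complex)
    state[0] = 1.0                                # start in |00>

    # A Hadamard on the first qubit spreads amplitude between |00> and |10>.
    H = np.array([[1, 1], [1, -1]]) / np.sqrt(2)
    I = np.eye(2)
    state = np.kron(H, I) @ state

    # Measurement collapses to ONE basis state, with probability |amplitude|^2.
    probs = np.abs(state) ** 2
    print(probs)                                  # [0.5, 0, 0.5, 0]
    outcome = np.random.choice(4, p=probs)
    print(format(int(outcome), "02b"))            # "00" or "10", each about half the time
    ```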

    This can be very useful since sampling from a distribution can sometimes be much nicer than actually solving a problem (e.g. you replace a solver with a simulator of the output).
    In traditional computing this can also be done but that gives you the fundamental problem of sampling from very complex probability distributions which involves approximating usually intractable integrals.

    However, there are also massive limitations to the type of things a quantum computer can model in this way since quantum theory is inherently linear (i.e. no climate modelling regardless of how often people claim they want to do it).
    There’s also the question of how many things exist where it is more efficient to build such a distribution and sample from it, rather than having a direct solver.
    If you look at the classic quantum algorithms (e.g. https://en.wikipedia.org/wiki/Quantum_algorithm), you can see that there aren’t really that many algorithms out there (this is of course not an exhaustive list but it gives a pretty good overview) where it makes sense to use quantum computing and pretty much all of them are asymptotically barely faster or the same speed as classical ones and most of them rely on the fact that the problem you are looking at is a black-box one.

    Remember that one of the largest useful problems that was ever solved on a quantum computer up until now was factoring the number 21, with a specialised version of Shor’s algorithm that only works for that number (since the full Shor’s algorithm would need many orders of magnitude more qbits than exist on the entire planet).

    There’s also the problem of logical vs physical qbits: In computer science we like to work with “perfect” qbits that are mathematically ideal, i.e. completely noise free. However, physical qbits are really fragile and couple to pretty much anything and everything, which adds a lot of noise into the system. This problem also gets worse the larger you scale your system.

    The latter is a fundamental problem: the whole point of quantum computers is that you can combine random states to “virtually” build a complex distribution before you sample from it. This can be much faster since the virtual model can capture dependencies that are intractable to work with on a classical system, but that dependency monster also means that any noise in the system is going to negatively affect everything else as you scale up to more qbits.
    That’s why people expect real quantum computers to have many orders of magnitude more qbits than you would theoretically need.

    It also means that you cannot trivially scale up a physical quantum algorithm: a physical Grover’s search on a list with 10 entries might look very different from one on a list with 11 entries.
    This makes quantum computing a nonstarter for many problems where you cannot pay the time it takes to engineer a custom solution.
    And even worse: you cannot even test whether your fancy new algorithm works in a simulator, since the stuff you are trying to simulate is specifically the intractable quantum noise (something which, ironically, a quantum computer is excellent at simulating).

    In general you should be really careful when looking at quantum computing articles, since it’s very easy to build some weird distribution that is basically impossible for a normal computer to work with, but that doesn’t mean it’s something practical e.g. just starting the quantum computer, “boop” one bit, then waiting for 3ns will give you a quantum noise distribution that is intractable to simulate with a computer (same thing is true if you don’t do anything with a computer: there’s literal research teams of top scientists whose job boils down to “what are quantum computers computing if we don’t give them instructions”).

    Meanwhile, the progress of classical or e.g. hybrid analog computing is much faster than that of quantum computing, which means that the only people really deeply invested into quantum computing are the ones that cannot afford to miss, just in case there is in fact something:

    • finance
    • defence
    • security



  • While the inability to source is a huge problem, you also have to keep in mind that complaining about AI has other objectives beyond the obvious “AI bad”.

    • it’s marketing: “Our thing is so powerful it could irreparably change someone’s life” is still advertising even if that irreparable change is bad. Saying “AI so powerful it’s dangerous” just sounds less advertis-y than “AI so powerful you cannot not invest in it” despite both leading to similar conclusions (you can look back at the “fearvertising” done during the original AI boom: same paint, different color)
    • it’s begging for regulatory barriers to be put into place: Everyone with a couple of million can build an LLM from scratch. That might sound like a lot, but it’s only getting cheaper and it doesn’t need highly intricate systems to replicate. Specifically, the ability to finetune a large model with few datapoints allows even open-source non-profits like OpenAssistant to compete against the likes of Google and OpenAI: Google has made that very explicit in their leaked “We have no moat” memo. This is why you see people like Sam Altman talking to Congress about the dangers of AI: he has no serious competitive advantage and hopes that with sufficient fear-mongering he can get the government to give him one.

    Complaining about AI is as much about the AI as it is about the economic incentives behind AI.