• 4 Posts
  • 1.27K Comments
Joined 2 years ago
cake
Cake day: August 21st, 2024

help-circle

  • all of that is correct and also basically what i said. the reason it became a big deal was because of accessibility. the definition of a “long press” is short enough that older users, who tend to hold the mouse button down for longer after a click, were suddenly seeing popup windows everywhere and, believing it to be an issue of the site they were on, assumed their popup blocker was broken. the timing was adjusted to one second in an update and also the long press shortcut was made optional.

    when it was pointed out to mozilla that having a popup containing one big blue button in a popup with fancy graphics around it might also be harking back to the popup ads of yore, and that it might compel people to click on the only visible button on instinct, their head of firefox did an interview with pc world where he countered this with, and i’m paraphrasing here, “nuh-uh”. this was in reference to an ama they did where several interaction experts weighed in with frankly pretty standard stuff: don’t surprise the user, don’t shove things in their face, don’t draw attention needlessly.

    for reference, here is the popup post-fixing:

    before they pushed an update the button didn’t say “continue”, it said “Summarize with AI ✨” and didn’t have a “cancel” option.



















  • one of my most recent fun activities came from discovering the “allow editing” button in koboldcpp. since the model is fed the entire conversation so far as its only context, and doesn’t save data between iterations, you can basically re-write its memory on the fly. i knew this before but i’d never though to do it until there was an easy ui option for it, and it turned out to be a lot of fun, because when using a “thinking” model like qwen3.5 you can convince it that it’s bypassing its own censorship.

    basically you give the model a prompt to work off of, pause it in the middle of the thinking process, change previous thoughts to something it’s been trained to filter out (like sex or violence or opinions critical of the ccp), and it will start second-guessing itself. sometimes it gets stuck in a loop, sometimes it overcomes the contradiction (at which point you can jump in again and tweak its memory some more) and sometimes it gets tied up in knots trying to prove a negative.

    a previous experiment was about feeding stable diffusion images back into itself to see what happens. i was inspired by a talk at 37c3 where they demonstrated model collapse by repeatedly trying to generate the same image as they put in (i think this was how sora worked).