

Then how would they sell access in a deniable way?
Ah. Yeah, nuke it from orbit. Since this was a RAT, it had local execution powers, and the attackers knew exactly which distro they were targeting, so they could have used some security vulnerability to get root and, in the worst case, even replace the kernel. Hopefully not microcode insertion, so the hardware could be OK
But then, it wasn’t an attack on an existing package. So the question is how many people actually downloaded those
Ah, you mean for the fediverse to work as an LDAP?
My point is: let’s imagine we have a board on some instance. You use your account on another instance to ask the owner of the board to give you access to it. The contents of such a board are, IMO in most cases, “members only”, so any changes happening inside should not be sent out to federating instances. Otherwise the privacy of such boards would be at the mercy of the privacy of other instances: if restricted changes were sent out, technically speaking, any server it federates to could choose to show that content to everyone. Which means you won’t be able to access the contents via any other instance. Apart from the logging-in part, you will still need to go to the instance hosting the board. Unless it were for publicly accessible boards only, like Codeberg issues. That use case could work
It’s all spiraling out of control
Beam me up, Scotty!
curl looks fine too
Hm. I just clicked it in Firefox and got a response JSON. A good response
So, a guy sheds some light on what’s going on, the accused name-drops him, and now the guy has to be protected. Whatever he could have picked up during the trials, he is no longer in a position to keep his ear to the ground. It seems the protection, while it did successfully protect him (which is good, in a way), also worked as a blocker, so he couldn’t do more damage during a time critical for the group
It was the AUR. The way the AUR works is that there is a PKGBUILD file that tells makepkg how to build a package from scratch. It can be written so that nothing gets compiled and only a precompiled binary is downloaded (like from GitHub releases). So it was not a package in the purely Arch sense. With those PKGBUILDs pulled from the AUR, the malicious binaries only sit on their GitHub, or wherever they were hosted, and are no longer reachable via AUR helpers (pacman, the official package manager, doesn’t offer the AUR at all)
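To illustrate, a minimal PKGBUILD along those lines could look roughly like this (a hypothetical sketch; the package name, URL and file names are made up):

pkgname=some-tool-bin
pkgver=1.0.0
pkgrel=1
arch=('x86_64')
# nothing is compiled: the "source" is already a prebuilt binary
source=("https://github.com/example/some-tool/releases/download/v$pkgver/some-tool")
sha256sums=('SKIP')

package() {
  # just copy the downloaded binary into place
  install -Dm755 "$srcdir/some-tool" "$pkgdir/usr/bin/some-tool"
}

makepkg would simply fetch and repackage whatever that URL serves, which is exactly why pulling the PKGBUILD from the AUR leaves the binary stranded wherever it was hosted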
Yes, but that was in the OP. Maybe normal disk is not feasible for some reason
no graphics card whatsoever
The computer can play H.265 and equivalents without trouble, provided the video file is no higher than 1080p.
The computer can play AV1 files no higher than 1080p only if I shut every other application down. If, for example, I run a browser and an AV1 file with either mpv or VLC, the system shuts down.
Can I put all that memory to use and avoid overloading the CPU?
Most of the answers seem to focus on the main problem, but your question got me thinking.
Since you are not getting shutdowns with lower qualities, maybe you could use RAM to play those videos.
Set up a tmpfs. Before you start all the other things, use ffmpeg to re-encode the video to something without any compression, maybe tell it not to work too fast (like one frame at a time), and put the result on that tmpfs. Playing this new file might then be less demanding. The key would be not forcing it to decode 30 fps of AV1 in real time
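Something along these lines, as a rough sketch (the mount point, the 12G size and the file names are placeholders; note that uncompressed video gets very large, so a lossless codec like FFV1 may be a more realistic middle ground than raw video):

sudo mount -t tmpfs -o size=12G tmpfs /mnt/ram
# re-encode ahead of time; -threads 1 keeps the AV1 decoding from hogging every core
ffmpeg -threads 1 -i ~/video.mkv -c:v rawvideo -c:a copy /mnt/ram/video.nut
# playback then only has to shovel frames from RAM, not decode AV1 in real time
mpv /mnt/ram/video.nut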
Although… Are you sure all this RAM is fine? Maybe it shuts down on more demanding videos because with those the RAM usage raises to the faulty part?
With that flow, those might have counted more as remixes than copies ;D
$ sh
sh-5.2$ echo dfgsdfgfd |& tee /tmp/t
dfgsdfgfd
sh-5.2$ cat /tmp/t
dfgsdfgfd
sh-5.2$
¯\_(ツ)_/¯
True
But nowadays /bin/sh is often just a link to bash
https://www.man7.org/linux/man-pages/man1/bash.1.html
Pipelines
A pipeline is a sequence of one or more commands separated by one of the control operators | or |&. The format for a pipeline is:
[time [-p]] [ ! ] command1 [ [|⎪|&] command2 … ]
(…) If |& is used, command1’s standard error, in addition to its standard output, is connected to command2’s standard input through the pipe; it is shorthand for 2>&1 |. This implicit redirection of the standard error to the standard output is performed after any redirections specified by command1.
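As a quick illustration of that shorthand (a throwaway example; /tmp/both is an arbitrary file name):

# ls complains on stderr about the missing path; |& pipes that complaint into tee
ls /nonexistent |& tee /tmp/both
# equivalent long form, per the man page: redirect stderr to stdout, then pipe
ls /nonexistent 2>&1 | tee /tmp/both
cat /tmp/both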
How would you see it work? IMO such boards are mostly for personal/organisational use, not a social space
For &1 to work, don’t we need to be using some shell anyway?
IMO |& tee dirlist is easier to manage
Oooh. Can’t wait to test this out