Geek Stuff category archive
So You Think Your Boot Is Secure?
Per security maven Bruce Schneier, it may well not be.
House of PCI Cards
Security maven Bruce Schneier takes a look at last week’s CrowdStrike computer failure and concludes that the potential for such an event is a feature, not a bug, fostered by the free hand of the market.
Here’s a bit from his article (emphasis added):
Follow the link for the context of that comparison.
Screening into the Night
At Psychology Today Blogs, Dr. Lantie Elisabeth Jorandby reacts to the Surgeon General’s recent proposal that “social” media should come with warning labels for young persons by looking at the ample evidence that “social” media isn’t. Here’s a small piece of her piece:
Given that the average daily social media use among that age group is now 4.8 hours, and that 3 hours a day is associated with twice the depression and anxiety risk, you have to ask: What effect could 4.8 hours a day have on young people?
The entire article is worth your whiles.
Twits Own Twitter X Offenders . . .
A fellow Tech Billionaire takes Elon Musk to task. A snippet:
“Hard for me to support someone with no values, lies, cheats, rapes, demeans women, hates immigrants like me,” Khosla replied. “He may cut my taxes or reduce some regulation but that is no reason to accept depravity in his personal values. Do you want President who will set back climate by a decade in his first year? Do you want his example for your kids as values?”
Our Surveillance Society
In related news, the EFF wants you to know that your car may be watching you.
Geeking Out
Mageia v. 9 with the Fluxbox window manager. The wallpaper is from my collection.
I’ve been using Mageia on various boxes since v. 3. I find it quite a nice piece of work.
It’s All about the Algorithm
At Psychology Today Blogs, Alain Samson takes a deep dive into how and why “clickbait” works, noting, among other things, that it
. . . exploits cognitive and emotional traits, triggering impulsive clicks
I must warn you that it’s not the best-written piece, but I think it still worth at least a skim. It might help you protect yourself against the nattering nabobs of nonsense trying to suck(er) you down their rabbit holes of rubbish.
The Crypto Con
Writing at the Washington Monthly, Graham Steele expresses concern that Congress might fall for the Crypto Con. He notes that
Follow the link to find out why he thinks this bill is a not very good, really really bad, exceedingly stupid idea.
Aside:
If you want to learn more about the Crypto Con, check out Harry Shearer’s Le Show, where he has a regular feature on the “Crypto Winter.” (Click on an episode and you can see what topics are covered in that episode and what time they appear.)
It’s All about the Algorithm
At Psychology Today Blogs, Russell Ramsay reports on a study documenting how and why “social” media is not a reliable source of information. He notes that the study focused on TikTok (studies do have to focus, after all), but that its findings extend to “social” media in general.
In that context, he reminds us that
. . . social media platform algorithms are driven by popularity, not accuracy.
I commend the entire piece to your attention.
Artificial? Yes. Intelligent? Not So Much.
Methinks Atrios makes a telling point.
Artificial? Yes. Intelligent? Not So Much.
A threat to the polity? Most certainly.
Droning On
In a twist, the U.S. finds itself fighting new tech with an old law.
It’s All about the Algorithm
At Psychology Today Blogs, Jessica Koehler explores a number of ways in which “social” media isn’t. Here’s one of those ways (emphasis in the original):
Echo Chambers Leading to Confirmation Bias
Algorithms often create echo chambers by displaying content that aligns with users’ existing beliefs. This phenomenon reinforces negative thought patterns and contributes to polarized thinking. A report from the Proceedings of the National Academy of Sciences emphasized that social media plays a significant role in creating these echo chambers. The resulting increased isolation and misunderstandings in social contexts can be profound. Similarly, a report by the Guardian highlighted how algorithms on platforms like Facebook and Twitter exacerbate these echo chambers, intensifying political and social divides. This limited exposure to diverse perspectives leads to a narrower and more biased worldview.
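For the code-inclined, here is a minimal, purely hypothetical sketch of the mechanism Koehler describes: a feed that ranks only by predicted engagement keeps serving a user more of whatever she has already clicked on, and the dissenting stuff never makes the cut. (Nothing below comes from any real platform; it is an illustration, nothing more.)

```python
# Toy illustration (not any platform's actual code) of engagement-driven
# ranking: posts on topics the user already engages with float to the top.

from collections import Counter

# Hypothetical engagement history: topics this user has clicked on before.
user_history = ["politics_left", "politics_left", "cats", "politics_left"]

# Candidate posts the feed could show next, tagged by topic.
candidates = [
    {"id": 1, "topic": "politics_left"},
    {"id": 2, "topic": "politics_right"},
    {"id": 3, "topic": "cats"},
    {"id": 4, "topic": "gardening"},
]

def engagement_score(post, history):
    """Score a post by how often the user has engaged with its topic."""
    return Counter(history)[post["topic"]]

# Rank purely by predicted engagement: familiar topics win, unfamiliar
# (possibly corrective) ones sink -- an echo chamber in miniature.
feed = sorted(candidates, key=lambda p: engagement_score(p, user_history), reverse=True)
for post in feed:
    print(post["id"], post["topic"], engagement_score(post, user_history))
```

Run it and the “politics_left” post wins every time, while the post from the other side of the aisle never surfaces.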
Follow the link for more ways in which “social” media isn’t and for Koehler’s suggestions as to how to ameliorate its antisocial effects.
No Place To Hide
The EFF explores how some car manufacturers are tracking your movements and selling their findings. Needless to say, the EFF thinks oversight is required. Here’s a bit of the article.
Car manufacturers including General Motors, Kia, Subaru, and Mitsubishi have some form of services or apps that collect, maintain, and distribute your connected car data to insurance companies. Insurance companies spend thousands of dollars purchasing your car data to factor in these “select insights” about your driving behavior. Those insights are then factored into your “risk score,” which can potentially spike your insurance premiums.
Afterthought:
It’s ironic, is it not?
Many persons sweat bullets about government surveillance, which has rules and regulations (and is nowhere near so extensive as some would have us believe), then run nekkid through industrial, for-profit tracking of their day-to-day activities.
Artificial? Yes. Intelligent? Not So Much.
Indiana University law professor Michael Mattioli, reacting to the recent kerfuffle over OpenAI’s attempt to steal mimic Scarlett Johansson’s voice, raises an interesting question:
- Why are Silicon Valley Tech Bros expending so much energy trying to create AI bots that sound human, when other more efficient ways of interacting with computers have worked very nicely for decades?
Here’s a tiny bit of his answer (emphasis added):
There’s also an echo of the ancient quest to commune with eternity, to grasp immortality, woven into AI chatbots like Sky. The pyramids served as eternal vessels for a pharaoh’s spirit; what is lifelike AI if not an attempt to capture and channel a human being’s essential nature?
(Or could it be that they just want to make their fantasies of being Captain Kirk sitting in the captain’s chair saying, “Computer . . . .” come to life?)
Aside:
Speaking of AI, security maven Bruce Schneier thinks that AI will make phishing attempts even less fishy and even harder to detect.