Running Naked through the Internet category archive
The EFF reports on a victory for privacy. A snippet:
Phone app location data brokers are a growing menace to our privacy and safety. All you did was click a box while downloading an app. Now the app tracks your every move and sends it to a broker, which then sells your location data to the highest bidder, from advertisers to police.
So it is welcome news that the Federal Trade Commission has brought a successful enforcement action against X-Mode Social (and its successor Outlogic).
The FTC’s complaint illustrates the dangers created by this industry. The company collects our location data through software development kits (SDKs) incorporated into third-party apps, through the company’s own apps, and through buying data from other brokers. The complaint alleged that the company then sells this raw location data, which can easily be correlated to specific individuals.
More at the link.
I find it ironic that persons sweat bullets about limited and regulated “government surveillance” while willingly and heedlessly running nekkid before corporate collectors of confidentia–oh, never mind.
Artificial “Intelligence” seems to be the hot new toy on the inner webs. A lot of persons are jumping in and playing with these tools just because they can, without considering the implications of running nekkid through the internet.
At Above the Law, Ayesha Haq, who specializes in legal issues regarding privacy and the internet, is not sanguine about the implications of ChatGPT and similar–er–tools as regards users’ personal information. I commend her piece to your attention; here’s a tiny bit of it:
1. It does not state a legal basis for processing the personal information it receives.
2. Users are not given a mechanism to exercise their “right to be forgotten” or “right to amend” personal information.
3. Personal information is stored indefinitely with no insight on how that data is secured and protected.
4. ChatGPT gathers information from unknown sources on the internet. If a user has any digital footprint, chances are ChatGPT knows a great deal about that user depending on what is available on the internet. This knowledge may be false, and the user has no recourse to correct, amend, or even delete the false information.
In episode 438 of the Going Linux podcast, Larry and Bill parse the Windows 10/11 EULA (End User Licensing Agreement) so you don’t have to.
There’s a reason these documents are generally written at about the 17th grade level.
They want you not to read them.
Bruce Schneier thinks that the efforts to ban TikTok, which seems to be the new “in” thing in the West, miss the point. A snippet:
If we want to address the real problem, we need to enact serious privacy laws, not security theater, to stop our data from being collected, analyzed, and sold—by anyone. Such laws would protect us in the long term, and not just from the app of the week. They would also prevent data breaches and ransomware attacks from spilling our data out into the digital underworld, including hacker message boards and chat servers, hostile state actors, and outside hacker groups. And, most importantly, they would be compatible with our bedrock values of free speech and commerce, which Congress’s current strategies are not.
The entire article is worth a read.
The EFF reports on a California court’s decision to disallow evidence from a “geofence” warrant.* The report also discusses Google’s procedure for responding to such warrants and notes that this is one of several rulings questioning such warrants.
Here’s one bit from the article; follow the link to read the rest.
Me, I keep “location services” turned off on my phone unless I have a positive need, which is almost never, because I know how to read a map. Remember maps?
*Briefly, a geofence warrant is issued to the corporate surveillance state–all those companies that track the location of our phones or other devices so they can “improve your online experience”–to find out who was in the vicinity of a crime. The police then go through the list to pick out and pursue possible suspects.
Some good news from Bruce Schneier.
Personally, I keep the GPS (Google calls it “location services”) turned off on my Android devices unless I have a positive need for it, which is almost never. That means trackers can know my general location, sure, but they don’t know whether I’m in the drug store or the hardware store.
Big Brother is here, but he’s not who persons expected him to be.
The EFF looks at the roundly debunked movie, 2000 Mules, and points out that, in addition to its outright lies and–er–dubious conclusions, the film highlights the invasive nature of our private enterprise surveillance society. Here’s a bit from the EFF’s article; follow the link for much more.
Putting aside the logical flaws of TTV’s (True the Vote, the organization behind the movie–ed.) voter fraud claims, the very fact that they were able to buy this much personal location data on hundreds of thousands of people’s lives, over a span of many months leading to election day, is appalling. But this is the data broker business model working as intended: by vacuuming up geolocation data from thousands of smartphone apps, data brokers package and sell huge quantities of highly revealing location data to anyone willing to buy it. And TTV is hardly the only customer: the U.S. military, federal agencies, and federal law enforcement are all customers to geolocation data brokers. Recently, one data broker was even found selling the location data of people seeking reproductive healthcare, which soon could provide states with draconian anti-abortion legislation new digital evidence to identify and prosecute people who seek or provide abortion.
And the irony! Even as persons were fretting about the “surveillance state,” those same persons failed to notice that private enterprise was assembling a corporate surveillance monster beyond anything George Orwell ever imagined. Heck, they turned a blind eye to it even as they happily agreed to those unread internet “terms of service” agreements that made it possible.
Mangy comments at the Youtube page:
Kevin McCarthy said Trump was responsible for the January 6th insurrection, then he said Trump was in no way responsible for the insurrection, then he said there was no insurrection, then he said he was misquoted when he actually said Trump was responsible for a mid-course correction, then he said Donald was responsible for a major erection. Later, Kevin claimed he said none of those things, but he loved Trump like a brother and would even love him like Stormy Daniels if it meant he’d be Speaker of the House some day.
Kevin clearly is an opportunistic eel with no guiding principles or moral compass. Mangy thought Kevin needed a song to sing, since making statements that are self-contradictory is a bad look for him. By singing about his relationship with Trump, Kevin will engage a wider audience and prepare for a time when he is dumped from the U.S. House of Representatives and hoping for a big break on the has-been-celebrity version of America’s Got Limited Talent.
This is a must-listen if you use the internet (Oh! You’re here already!). If you can’t listen to it now, bookmark it and come back, or watch it at Chron.com.
Via Chron.com, which has more.
I normally configure my browsers to “delete all cookies” on exit and, if the setting is available, “reject third-party cookies.” And I won’t use Google Chrome on a bet.
And, on those rare times I visit the Zuckerborg, I do so only in a private window.
Bruce Schneier reports:
He goes on to opine that Apple doesn’t seem to have thought this whole AirTag through.
Follow the link for more.
The Las Vegas Sun editorial board considers a speech at a recent right-wing gathering and concludes:
Follow the link for their reasoning.
One more time, “social” media isn’t and the internet is a public place.
And no one’s watching the watchers, not even the persons paid to watch the watchers.
The question is, “Can you keep it secret?”
Frances Coleman points out that, at least as regards “social” media, your privacy is indeed in jeopardy.
David Neiwert explains. A snippet:
A recent study demonstrates that YouTube’s recommendations—which send users to videos the algorithm believes the viewer will like—are in fact promoting videos that violates (sic) the company’s content policies, including hate speech and disinformation. In many cases, the platform is recommending content that has little or no relation to the video that was watched previously. And the company has made clear it has no intention of changing things.
Follow the link for the full story.