Australia

Australian Federal Court Rules Apple and Google Engaged in Anti-Competitive App Store Conduct (abc.net.au) 16

Australia's Federal Court ruled Tuesday that Apple and Google violated competition law through anti-competitive app store practices. Judge Jonathan Beach found both companies breached section 46 of the Competition and Consumer Act by misusing market power to reduce competition.

The decision covers class actions representing 15 million consumers and 150,000 developers seeking compensation for inflated prices from 2017-2022, plus separate Epic Games cases. Apple's exclusive iOS App Store and mandatory payment system, along with Google's Play Store billing requirements, were ruled anti-competitive despite security justifications. Compensation amounts will be determined at subsequent hearings, with estimates reaching hundreds of millions of dollars.

Wikipedia

Wikipedia Operator Loses Court Challenge To UK Online Safety Act Regulations (reuters.com) 54

The operator of Wikipedia on Monday lost a legal challenge to parts of Britain's Online Safety Act, which sets tough new requirements for online platforms and has been criticized for potentially curtailing free speech. From a report: The Wikimedia Foundation took legal action at London's High Court over regulations made under the law, which it said could impose the most stringent category of duties on Wikipedia.

The foundation said if it was subject to so-called Category 1 duties -- which would require the identities of Wikipedia's users and contributors to be verified -- it would need to drastically reduce the number of British users who can access the site. Judge Jeremy Johnson dismissed its case on Monday, but said the Wikimedia Foundation could bring a further challenge if regulator Ofcom "(impermissibly) concludes that Wikipedia is a Category 1 service".

Crime

It's Steve Wozniak's 75th Birthday. Whatever Happened to His YouTube Lawsuit? (cbsnews.com) 98

In 2020 a YouTube video used video footage of Steve Wozniak in a scam to steal bitcoin. "Some people said they lost their life savings," Wozniak tells CBS News, explaining why he sued YouTube in 2020 — and where his case stands now: Wozniak's lawsuit against YouTube has been tied up in court now for five years, stalled by federal legislation known as Section 230. Attorney Brian Danitz said, "Section 230 is a very broad statute that limits, if not totally, the ability to bring any kind of case against these social media platforms."

"It says that anything gets posted, they have no liability at all," said Wozniak. "It's totally absolute."

Google responded to our inquiry about Wozniak's lawsuit with a statement from José Castañeda, of Google Policy Communications: "We take abuse of our platform seriously and take action quickly when we detect violations ... we have tools for users to report channels that are impersonating their likeness or business." [Steve's wife] Janet Wozniak, however, says YouTube did nothing, even though she reported the scam video multiple times: "You know, 'Please take this down. This is an obvious mistake. This is fraud. You're YouTube, you're helping dupe people out of their money,'" she said.

"They wouldn't," said Steve...

Today is Steve Wozniak's 75th birthday. (You can watch the interview here.) And the article includes this interesting detail about Woz's life today: Wozniak sold most of his Apple stock in the mid-1980s when he left the company. Today, though, he still gets a small paycheck from Apple for making speeches and representing the company. He says he's proud to see Apple become a trillion-dollar company. "Apple is still the best," he said. "And when Apple does things I don't like, and some of the closeness I wish it were more open, I'll speak out about it. Nobody buys my voice!"

I asked, "Does Apple listen to you when you speak out?"

"No," Wozniak smiled. "Oh, no. Oh, no."

Wozniak answered questions from Slashdot readers in 2000 and again in 2012.

And he dropped by Slashdot on his birthday to leave this comment for Slashdot's readers...

Microsoft

Microsoft Sued Over Plans to Discontinue Windows 10 Support (courthousenews.com) 276

A California man named Klein sued Microsoft Thursday over its plan to stop supporting Windows 10 on October 14th, reports Courthouse News. Though Windows 11 was launched nearly four years ago, many of its billion or so worldwide users are clinging to the decade-old Windows 10... According to StatCounter, nearly 43% of Windows users still use the old version on their desktop computers....

"With only three months until support ends for Windows 10, it is likely that many millions of users will not buy new devices or pay for extended support," Klein writes in his complaint. "These users — some of whom are businesses storing sensitive consumer data — will be at a heightened risk of a cyberattack or other data security incident, a reality of which Microsoft is well aware...." According to one market analyst writing in 2023, Microsoft's shift away from Windows 10 will lead millions of customers to buy new devices and throw out their old ones, consigning as many as 240 million PCs to the landfill....

Klein is asking a judge to order Microsoft to continue supporting Windows 10 without additional charge until the number of devices running the older operating system falls below 10% of total Windows users. He says nothing about seeking money for himself, though the complaint does ask for attorneys' fees.

Microsoft did not respond to an email requesting a comment.

The complaint also requests an order requiring Microsoft's advertising "to disclose clearly and prominently the approximate end-of-support date for the Windows operating system purchased with the device at the time of purchase" or at least "disclose that support is only guaranteed for a certain delineated period of time without additional cost, and to disclose the potential consequences of such end-of-support for device security and functionality."

AI

Students Have Been Called to the Office - Or Arrested - for False Alarms from AI-Powered Surveillance Systems (apnews.com) 162

In 2023 a 13-year-old girl "made an offensive joke while chatting online with her classmates," reports the Associated Press.

But when the school's surveillance software spotted that joke, "Before the morning was even over, the Tennessee eighth grader was under arrest. She was interrogated, strip-searched and spent the night in a jail cell, her mother says." Her parents filed a lawsuit against the school system, according to the article (which points out the girl wasn't allowed to talk to her parents until the next day). "A court ordered eight weeks of house arrest, a psychological evaluation and 20 days at an alternative school for the girl." Gaggle's CEO, Jeff Patterson, said in an interview that the school system did not use Gaggle the way it is intended. The purpose is to find early warning signs and intervene before problems escalate to law enforcement, he said. "I wish that was treated as a teachable moment, not a law enforcement moment," said Patterson.

But that's just one example, the article points out. "Surveillance systems in American schools increasingly monitor everything students write on school accounts and devices." Thousands of school districts across the country use software like Gaggle and Lightspeed Alert to track kids' online activities, looking for signs they might hurt themselves or others. With the help of artificial intelligence, technology can dip into online conversations and immediately notify both school officials and law enforcement... In a country weary of school shootings, several states have taken a harder line on threats to schools. Among them is Tennessee, which passed a 2023 zero-tolerance law requiring any threat of mass violence against a school to be reported immediately to law enforcement....

Students who think they are chatting privately among friends often do not realize they are under constant surveillance, said Shahar Pasch, an education lawyer in Florida. One teenage girl she represented made a joke about school shootings on a private Snapchat story. Snapchat's automated detection software picked up the comment, the company alerted the FBI, and the girl was arrested on school grounds within hours... The technology can also involve law enforcement in responses to mental health crises. In Florida's Polk County Schools, a district of more than 100,000 students, the school safety program received nearly 500 Gaggle alerts over four years, officers said in public Board of Education meetings. This led to 72 involuntary hospitalization cases under the Baker Act, a state law that allows authorities to require mental health evaluations for people against their will if they pose a risk to themselves or others...

Information that could allow schools to assess the software's effectiveness, such as the rate of false alerts, is closely held by technology companies and unavailable publicly unless schools track the data themselves. Students in one photography class were called to the principal's office over concerns Gaggle had detected nudity. The photos had been automatically deleted from the students' Google Drives, but students who had backups of the flagged images on their own devices showed it was a false alarm. District officials said they later adjusted the software's settings to reduce false alerts. Natasha Torkzaban, who graduated in 2024, said she was flagged for editing a friend's college essay because it had the words "mental health...."

School officials have said they take concerns about Gaggle seriously, but also say the technology has detected dozens of imminent threats of suicide or violence. "Sometimes you have to look at the trade for the greater good," said Board of Education member Anne Costello in a July 2024 board meeting.

The Internet

Net Neutrality Advocates Won't Appeal Loss (arstechnica.com) 96

Advocacy groups have decided not to appeal a federal court ruling striking down Biden-era net neutrality rules, citing the FCC's current Republican majority and a Supreme Court they view as hostile to the issue. Instead, they plan to push for open internet protections through Congress, state laws, and future court cases, while noting California's net neutrality law remains in effect. Ars Technica reports: "Trump's election flipped the FCC majority back to ideologues who've always taken the broadband industry's side on this crucial issue. And the justices making up the current Supreme Court majority have shown hostility toward sound legal reasoning on this precise question and a host of other topics too," said Matt Wood, VP of policy and general counsel at Free Press. [...] "The 6th Circuit's decision earlier this year was spectacularly wrong, and the protections it struck down are extremely important. But rather than attempting to overcome an agency that changed hands -- and a Supreme Court majority that cares very little about the rule of law -- we'll keep fighting for Internet affordability and openness in Congress, state legislatures and other court proceedings nationwide," Wood said.

Besides Free Press, groups announcing that they won't appeal are the Benton Institute for Broadband & Society, New America's Open Technology Institute, and Public Knowledge. "Though the 6th Circuit erred egregiously in its decision to overturn the FCC's 2024 Open Internet order, there are other ways we can advance our fight for consumer protections and ISP accountability than petitioning the Supreme Court to review this case -- and, given the current legal landscape, we believe our efforts will be more effective if focused on those alternatives," said Raza Panjwani, senior policy counsel at the Open Technology Institute. Net neutrality could still reach the Supreme Court in another case. Andrew Jay Schwartzman, senior counselor of the Benton Institute for Broadband & Society, said that "the 6th Circuit decision makes bad policy as well as bad law. Because it is at odds with the holdings of two other circuits, we expect to take the issue to the Supreme Court in a future case."

Bug

UK Courts Service 'Covered Up' IT Bug That Lost Evidence (bbc.co.uk) 20

Bruce66423 shares a report from the BBC: The body running courts in England and Wales has been accused of a cover-up, after a leaked report found it took several years to react to an IT bug that caused evidence to go missing, be overwritten or appear lost. Sources within HM Courts & Tribunals Service (HMCTS) say that as a result, judges in civil, family and tribunal courts will have made rulings on cases when evidence was incomplete. The internal report, leaked to the BBC, said HMCTS did not know the full extent of the data corruption, including whether or how it had impacted cases, as it had not undertaken a comprehensive investigation. It also found judges and lawyers had not been informed, as HMCTS management decided it would be "more likely to cause more harm than good." HMCTS says its internal investigation found no evidence that "any case outcomes were affected as a result of these technical issues." However, the former head of the High Court's family division, Sir James Munby, told the BBC the situation was "shocking" and "a scandal." Bruce66423 comments: "Given the relative absence of such stories from the USA, should I congratulate you for better-quality software or for being better at covering up disasters?"

The Courts

AI Industry Horrified To Face Largest Copyright Class Action Ever Certified (arstechnica.com) 188

An anonymous reader quotes a report from Ars Technica: AI industry groups are urging an appeals court to block what they say is the largest copyright class action ever certified. They've warned that a single lawsuit raised by three authors over Anthropic's AI training now threatens to "financially ruin" the entire AI industry if up to 7 million claimants end up joining the litigation and forcing a settlement. Last week, Anthropic petitioned (PDF) to appeal the class certification, urging the court to weigh questions that the district court judge, William Alsup, seemingly did not. Alsup allegedly failed to conduct a "rigorous analysis" of the potential class and instead based his judgment on his "50 years" of experience, Anthropic said.

If the appeals court denies the petition, Anthropic argued, the emerging company may be doomed. As Anthropic argued, it now "faces hundreds of billions of dollars in potential damages liability at trial in four months" based on a class certification rushed at "warp speed" that involves "up to seven million potential claimants, whose works span a century of publishing history," each possibly triggering a $150,000 fine. Confronted with such extreme potential damages, Anthropic may lose its rights to raise valid defenses of its AI training, deciding it would be more prudent to settle, the company argued. And that could set an alarming precedent, considering all the other lawsuits generative AI (GenAI) companies face over training on copyrighted materials, Anthropic argued. "One district court's errors should not be allowed to decide the fate of a transformational GenAI company like Anthropic or so heavily influence the future of the GenAI industry generally," Anthropic wrote. "This Court can and should intervene now."

In a court filing Thursday, the Consumer Technology Association and the Computer and Communications Industry Association backed Anthropic, warning the appeals court that "the district court's erroneous class certification" would threaten "immense harm not only to a single AI company, but to the entire fledgling AI industry and to America's global technological competitiveness." According to the groups, allowing copyright class actions in AI training cases will result in a future where copyright questions remain unresolved and the risk of "emboldened" claimants forcing enormous settlements will chill investments in AI. "Such potential liability in this case exerts incredibly coercive settlement pressure for Anthropic," industry groups argued, concluding that "as generative AI begins to shape the trajectory of the global economy, the technology industry cannot withstand such devastating litigation. The United States currently may be the global leader in AI development, but that could change if litigation stymies investment by imposing excessive damages on AI companies."

Encryption

Encryption Made For Police and Military Radios May Be Easily Cracked (wired.com) 64

An anonymous reader quotes a report from Wired: Two years ago, researchers in the Netherlands discovered an intentional backdoor in an encryption algorithm baked into radios used by critical infrastructure -- as well as police, intelligence agencies, and military forces around the world -- that made any communication secured with the algorithm vulnerable to eavesdropping. When the researchers publicly disclosed the issue in 2023, the European Telecommunications Standards Institute (ETSI), which developed the algorithm, advised anyone using it for sensitive communication to deploy an end-to-end encryption solution on top of the flawed algorithm to bolster the security of their communications. But now the same researchers have found that at least one implementation of the end-to-end encryption solution endorsed by ETSI has a similar issue that makes it equally vulnerable to eavesdropping. The encryption algorithm used for the device they examined starts with a 128-bit key, but this gets compressed to 56 bits before it encrypts traffic, making it easier to crack. It's not clear who is using this implementation of the end-to-end encryption algorithm, nor if anyone using devices with the end-to-end encryption is aware of the security vulnerability in them. Wired notes that the end-to-end encryption the researchers examined is most commonly used by law enforcement and national security teams. "But ETSI's endorsement of the algorithm two years ago to mitigate flaws found in its lower-level encryption algorithm suggests it may be used more widely now than at the time."
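
The practical effect of the key compression described above is easy to quantify: truncating a 128-bit key to 56 bits shrinks the keyspace by a factor of 2^72, bringing exhaustive search within reach of modest hardware. A back-of-the-envelope sketch (the key-testing rate is an illustrative assumption, not a figure from the researchers):

```python
# Back-of-the-envelope comparison of the full 128-bit keyspace with
# the 56-bit keyspace left after the compression step.
full_keyspace = 2 ** 128
reduced_keyspace = 2 ** 56

# The keyspace shrinks by a factor of 2^72.
shrink_factor = full_keyspace // reduced_keyspace

# Illustrative assumption (not a figure from the researchers): an
# attacker who can test one billion keys per second.
keys_per_second = 10 ** 9

seconds_to_exhaust = reduced_keyspace / keys_per_second
days_to_exhaust = seconds_to_exhaust / 86_400

print(f"shrink factor: 2^72 = {shrink_factor}")
print(f"exhausting 2^56 keys at 1e9 keys/s: about {days_to_exhaust:.0f} days")
```

At that assumed rate, the reduced keyspace falls in a couple of years of single-machine effort, while the full 128-bit keyspace would take on the order of 10^22 years; 56-bit keys (as in the long-deprecated DES) have been considered brute-forceable since the late 1990s.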

Privacy

'Facial Recognition Tech Mistook Me For Wanted Man' (bbc.co.uk) 112

Bruce66423 shares a report from the BBC: A man who is bringing a High Court challenge against the Metropolitan Police after live facial recognition technology wrongly identified him as a suspect has described it as "stop and search on steroids." Shaun Thompson, 39, was stopped by police in February last year outside London Bridge Tube station. Privacy campaign group Big Brother Watch said the judicial review, due to be heard in January, was the first legal case of its kind against the "intrusive technology." The Met, which announced last week that it would double its live facial recognition technology (LFR) deployments, said it was removing hundreds of dangerous offenders and remained confident its use is lawful. LFR maps a person's unique facial features, and matches them against faces on watch-lists. [...]
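
The "maps features, matches against watch-lists" step is where false positives like Mr Thompson's arise: such systems typically reduce each face to an embedding vector and declare a hit when its similarity to a watch-list entry crosses a tunable threshold. A toy sketch of that matching logic (the vectors, dimensions, and threshold here are invented for illustration, not drawn from any deployed system):

```python
import math
import random

def cosine_similarity(a, b):
    """Cosine similarity between two equal-length vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(y * y for y in b))
    return dot / (norm_a * norm_b)

def match(probe, watchlist, threshold=0.9):
    """Return indices of watch-list embeddings the probe face matches."""
    return [i for i, ref in enumerate(watchlist)
            if cosine_similarity(probe, ref) >= threshold]

random.seed(0)
# Five made-up 128-dimensional watch-list embeddings.
watchlist = [[random.gauss(0, 1) for _ in range(128)] for _ in range(5)]

# A probe face similar (but not identical) to watch-list entry 2,
# simulating a fresh camera capture of the same person.
probe = [x + random.gauss(0, 0.05) for x in watchlist[2]]
print(match(probe, watchlist))  # prints [2]
```

Lowering the threshold catches more genuine suspects but raises the false-match rate; where operators set that trade-off is exactly what cases like this one put under legal scrutiny.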

Mr Thompson said his experience of being stopped had been "intimidating" and "aggressive." "Every time I come past London Bridge, I think about that moment. Every single time." He described how he had been returning home from a shift in Croydon, south London, with the community group Street Fathers, which aims to protect young people from knife crime. As he passed a white van, he said police approached him and told him he was a wanted man. "When I asked what I was wanted for, they said, 'that's what we're here to find out'." He said officers asked him for his fingerprints, but he refused, and he was let go only after about 30 minutes, after showing them a photo of his passport.

Mr Thompson says he is bringing the legal challenge because he is worried about the impact LFR could have on others, particularly if young people are misidentified. "I want structural change. This is not the way forward. This is like living in Minority Report," he said, referring to the science fiction film where technology is used to predict crimes before they're committed. "This is not the life I know. It's stop and search on steroids. "I can only imagine the kind of damage it could do to other people if it's making mistakes with me, someone who's doing work with the community."

Bruce66423 comments: "I suspect a payout of 10,000 pounds for each false match that is acted on would probably encourage more careful use, perhaps with a second payout of 100,000 pounds if the same person is victimized again."

The Courts

Country's Strictest Ban On Election Deepfakes Struck Down By Judge (politico.com) 26

A federal judge struck down California's strict anti-deepfake election law, citing Section 230 protections rather than First Amendment concerns. Politico reports: [Judge John Mendez] also said he intended to overrule a second law, which would require labels on digitally altered campaign materials and ads, for violating the First Amendment. [...] The first law would have blocked online platforms from hosting deceptive, AI-generated content related to an election in the run-up to the vote. It came amid heightened concerns about the rapid advancement and accessibility of artificial intelligence, allowing everyday users to quickly create more realistic images and videos, and the potential political impacts. But opponents of the measures ... also argued the restrictions could infringe upon freedom of expression.

The original challenge was filed by the creator of the video, Christopher Kohls, on First Amendment grounds, with X later joining the case after [Elon Musk] said the measures were "designed to make computer-generated parody illegal." The satirical right-wing news website the Babylon Bee and conservative social media site Rumble also joined the suit. Mendez said the first law, penned by Democratic state Assemblymember Marc Berman, conflicted with the oft-cited Section 230 of the federal Communications Decency Act, which shields online platforms from liability for what third parties post on their sites. "They don't have anything to do with these videos that the state is objecting to," Mendez said of sites like X that host deepfakes.

But the judge did not address the First Amendment claims made by Kohls, saying it was not necessary in order to strike down the law on Section 230 grounds. "I'm simply not reaching that issue," Mendez told the plaintiffs' attorneys. [...] "I think the statute just fails miserably in accomplishing what it would like to do," Mendez said, adding he would write an official opinion on that law in the coming weeks. Laws restricting speech have to pass a strict test, including whether there are less restrictive ways of accomplishing the state's goals. Mendez questioned whether approaches that were less likely to chill free speech would be better. "It's become a censorship law and there is no way that is going to survive," Mendez added.

The Courts

Tornado Cash Co-Founder Storm Guilty in Crypto Mixing Case 8

A Manhattan jury convicted Tornado Cash co-founder Roman Storm on Wednesday of conspiring to operate an unlicensed money-transfer business, though jurors deadlocked on charges of money laundering conspiracy and sanctions violations after three days of deliberation.

Federal prosecutors alleged Storm helped cybercriminals launder more than $1 billion through the cryptocurrency mixing platform, which launched in 2019 as a decentralized protocol designed to obscure transaction origins by pooling and redistributing funds through smart contracts.

The Courts

OpenAI Offers 20 Million User Chats In ChatGPT Lawsuit. NYT Wants 120 Million. (arstechnica.com) 21

An anonymous reader quotes a report from Ars Technica: OpenAI is preparing to raise what could be its final defense to stop The New York Times from digging through a spectacularly broad range of ChatGPT logs to hunt for any copyright-infringing outputs that could become the most damning evidence in the hotly watched case. In a joint letter (PDF) Thursday, both sides requested to hold a confidential settlement conference on August 7. Ars confirmed with the NYT's legal team that the conference is not about settling the case but instead was scheduled to settle one of the most disputed aspects of the case: news plaintiffs searching through millions of ChatGPT logs. That means it's possible that this week, ChatGPT users will have a much clearer understanding of whether their private chats might be accessed in the lawsuit. In the meantime, OpenAI has broken down (PDF) the "highly complex" process required to make deleted chats searchable in order to block the NYT's request for broader access.

Previously, OpenAI had vowed to stop what it deemed was the NYT's attempt to conduct "mass surveillance" of ChatGPT users. But ultimately, OpenAI lost its fight to keep news plaintiffs away from all ChatGPT logs. After that loss, OpenAI appears to have pivoted and is now doing everything in its power to limit the number of logs accessed in the case -- short of settling -- as its customers fretted over serious privacy concerns. For the most vulnerable users, the lawsuit threatened to expose ChatGPT outputs from sensitive chats that OpenAI had previously promised would be deleted. Most recently, OpenAI floated a compromise, asking the court to agree that news organizations didn't need to search all ChatGPT logs. The AI company cited the "only expert" who has so far weighed in on what could be a statistically relevant, appropriate sample size -- computer science researcher Taylor Berg-Kirkpatrick. He suggested that a sample of 20 million logs would be sufficient to determine how frequently ChatGPT users may be using the chatbot to regurgitate articles and circumvent news sites' paywalls. But the NYT and other news organizations rejected the compromise, OpenAI said in a filing (PDF) yesterday. Instead, news plaintiffs have made what OpenAI said was an "extraordinary request that OpenAI produce the individual log files of 120 million ChatGPT consumer conversations."

That's six times more data than Berg-Kirkpatrick recommended, OpenAI argued. Complying with the request threatens to "increase the scope of user privacy concerns" by delaying the outcome of the case "by months," OpenAI argued. If the request is granted, it would likely trouble many users by extending the amount of time that users' deleted chats will be stored and potentially making them vulnerable to a breach or leak. As negotiations potentially end this week, OpenAI's co-defendant, Microsoft, has picked its own fight with the NYT over its internal ChatGPT equivalent tool that could potentially push the NYT to settle the disputes over ChatGPT logs.

The Courts

Rivian Sues To Sell Its EVs Directly In Ohio (techcrunch.com) 74

Rivian has filed a federal lawsuit in Ohio to challenge a state law preventing it from selling electric vehicles directly to consumers, arguing the rule is anti-competitive and outdated. The law currently protects legacy dealerships while allowing Tesla a special carve-out, and Rivian wants similar rights to apply for a direct-sales license in the state. TechCrunch reports: "Ohio's prohibition of Rivian's direct-sales-only business model is irrational in the extreme: it reduces competition, decreases consumer choice, and drives up consumer costs and inconvenience -- all of which harm consumers -- with literally no countervailing benefit," lawyers for the company wrote in the complaint. Rivian is asking the court to allow the company to apply for a dealership license so it can sell vehicles directly. Ohio customers currently have to buy Rivian vehicles from locations in other states where direct sales are allowed. The cars are then shipped to Rivian service centers within Ohio.

Allowing Rivian to sell directly would not be treading new legal ground, the company argues in its complaint. Tesla has had a license to sell in Ohio since 2013 and can sell directly to consumers. What's stopping Rivian is a 2014 law passed by the state's legislature. That law, which Rivian says came after an intense lobbying effort by the Ohio Automobile Dealers Association (OADA), effectively gave Tesla a carve-out and blocked any future manufacturers from acquiring the necessary dealership licenses.

"Consumer choice is a bedrock principle of America's economy. Ohio's archaic prohibition against the direct-sales of vehicles is unconstitutional, irrational, and harms Ohioans by reducing competition and choice and driving up costs and inconvenience," Mike Callahan, Rivian's chief administrative officer, said in a statement.

Google

Google Has Just Two Weeks To Begin Cracking Open Android, It Admits in Emergency Filing 14

An anonymous reader shares a report: Yesterday, when Epic won its Google antitrust lawsuit for a second time, it wasn't quite clear how soon Google would need to start dismantling its affirmed illegal monopoly.

Today, Google admits the answer is: 14 days. Google has just 14 days to enact major changes to its Google Play app store, and the way it does business with phonemakers, cellular carriers, and app developers, unless it wins an emergency stay (pause) from the Ninth Circuit Court of Appeals as it continues to appeal. It must stop forcing apps to use Google Play Billing, allow app developers to freely steer their users to other platforms, and limit the perks it can offer in exchange for preinstalled apps, among other changes.

IT

Belgium Bans Internet Archive's 'Open Library' (torrentfreak.com) 34

A Brussels court has issued an unusually broad site-blocking order targeting Internet Archive's Open Library alongside shadow libraries including Anna's Archive, Libgen, and Z-Library. The order, requested by publishing and author organizations, directs an unprecedented range of intermediaries to take action beyond traditional ISP blocks.

Search engines, DNS resolvers, advertisers, domain name services, CDNs, hosting companies, and payment processors -- including Google, Microsoft, Cloudflare, Amazon Web Services, PayPal, and Starlink -- must restrict access to the targeted sites. The court found "clear and significant infringement" in the ex parte proceeding.

United Kingdom

UK Supreme Court Gives Banks Partial Win on Car Finance Commissions (ft.com) 6

Financial Times: The UK's highest court has partially overturned a landmark motor finance judgment that threatened to leave banks on the hook for tens of billions of pounds in compensation for allegedly deceiving consumers with hidden commissions on car loans.

The Supreme Court's decision has been keenly awaited by investors as well as millions of consumers who were poised to claim redress from the banks. The government has been considering legislation to limit the fallout. The controversy over car finance shot to prominence after a bombshell Court of Appeal judgment in October that awarded compensation to three people who claimed they were misled by banks concealing the payment of commissions to dealerships.

The $58.3 billion car finance scandal centers on hidden commissions paid by lenders to car dealers who arranged loans without disclosing the payment amounts and terms to borrowers. Under discretionary commission arrangements, dealers received larger payments when they persuaded car buyers to accept higher interest rates on loans. The practice affected roughly 90% of new car purchases and many secondhand vehicles, potentially exposing millions of motorists to mis-selling.

Government

US Senators Introduce New Pirate Site Blocking Bill: Block BEARD (torrentfreak.com) 54

An anonymous reader quotes a report from TorrentFreak: Efforts to introduce pirate site blocking to the United States continue with the introduction of the "Block BEARD" bill (PDF) in the Senate. The bipartisan proposal, backed by Senators Tillis, Coons, Blackburn, and Schiff, aims to create a new legal mechanism to combat foreign piracy websites. Block BEARD is similar to the previously introduced House bill "FADPA", but doesn't directly mention DNS resolvers. [...] The site-blocking proposal seeks to amend U.S. copyright law, enabling rightsholders to request federal courts to designate online locations as a "foreign digital piracy site". If that succeeds, courts can subsequently order U.S. service providers to block access to these sites.

Pirate site designation would be dependent on rightsholders showing that they are harmed by a site's activities, that reasonable efforts had been made to notify the site's operator, and that a reasonable investigation confirms the operator is not located within the United States. Additionally, rightsholders must show that the site is primarily designed for piracy, has limited commercial purpose, or is intentionally marketed by its operator to promote copyright-infringing activities. If the court classifies a website as a foreign pirate site, rightsholders can go back to court to request a blocking order. At this stage, the court will determine whether it is technically and practically feasible for ISPs to block the site, and consider any potential harm to the public interest. The granted orders would stay in place for a year with the option to extend if necessary. If blocked sites switch to new locations, the court can also amend blocking orders to include new IP addresses and domain names.

The Block BEARD bill broadly applies to service providers as defined in section 512(k)(1)(A) of the DMCA. This is a broad definition that applies to residential ISPs, but also to search engines, social media platforms, and DNS resolvers. Service providers with fewer than 50,000 subscribers are explicitly excluded, and the same applies to venues such as coffee shops, libraries, and universities that offer internet access to visitors. Unlike the FADPA bill introduced by Representative Lofgren earlier this year, the Senate bill does not specifically mention DNS resolvers. Block BEARD does not mention VPNs, but its broad definition of "service provider" could be interpreted to include them. The proposal states that providers have the option to contest their inclusion in a blocking order. Once an order is issued, they would have the freedom to choose their own blocking techniques. There are no transparency requirements mentioned in the bill, so if and how the public is informed is unclear.

Google

Google Loses Epic Games Appeal, Must Open App Store To Rivals (reuters.com) 42

Google lost its appeal Thursday of a judge's order that will force the tech giant to open up its app store to competitors. The 9th Circuit Court of Appeals upheld a lower court ruling requiring Google Play to allow rival marketplaces and billing systems, ending a legal battle that began when Epic Games sued over anticompetitive practices.

A jury sided with Epic in December 2023, finding Google paid phone makers and app developers to use its store exclusively.

Sony

Sony Is Suing Tencent Over Shameless Horizon Knock-off Game (ign.com) 50

Sony has filed a lawsuit in California court against Tencent, alleging the Chinese company's upcoming game Light of Motiram constitutes a "slavish clone" of Sony's Horizon series.

The complaint details extensive similarities between the games, from post-apocalyptic robot dinosaur settings to red-haired female protagonists. Tencent had approached Sony for licensing deals in 2024, which Sony rejected twice.

Slashdot Top Deals