“Little brother”??!!
Unless mistranslated, it must be a reference to the “Big Brother” of Orwell's 1984.
Is Huawei planning to outcompete iOS and Android (and 5G?) on privacy and security in international markets?
But there is no guarantee that they mean it for real, nor that the Chinese government will let them.
As recently as two months ago, Huawei was opening EU “security labs” and boasting about its ABC Principles of Security: “A”, assume nothing; “B”, believe nobody; “C”, check everything. Yet there are no independent certifications that it follows its own ABC principles deeply enough to matter, i.e. down to CPU design and fabrication oversight.
If they are serious this time about ABC, that may be great for the rest of us.
Thirty years after the end of the Cold War, we would again be the object of a “soft power” competition between two superpowers offering competing socio-economic systems, this time in cyberspace.
“Self-driving cars are here”
https://medium.com/@andrewng/self-driving-cars-are-here-aea1752b1ad0
The same black-box problem arises in the main “runtime environment” of ever more powerful AIs. We should try to replicate proven “architectures of individual and collective human wisdom”. Such architectures have, in the many good-case scenarios, allowed us to tackle and even enjoy runaway unpredictability, while properly minimizing suffering.
If IT security boils down to that of the weakest “link”, then what is the sense of arbitrarily limiting the scope of the annual EU ENISA Cyber Threat Reports by excluding some “links”, such as the entire supply chain? Why is that?!
The US Defense Science Board said as early as 2005: “Trust cannot be added to integrated circuits after fabrication”.
In the Hardware Thematic Landscape document, vulnerabilities introduced during fabrication and in the supply chain are declared out of scope. They say: “According to the scoping performed in Section 1.1, supply chain security aspects which are relevant for invasive/integrated modification of hardware was not in scope of the good practice research.”
An absolute must-read for those who think an AI superintelligence explosion may be just science fiction, or decades away. Even the largest groups with huge economic interests in downplaying the risks of AI are now clearly spelling out those risks and hinting in the right direction. Comforting.
http://abcnews.go.com/International/head-nsas-elite-hacking-unit-hack/story?id=36573676
In assessing the veracity and completeness of what the head of NSA TAO says here, we should consider that it is an important part of his agency's mission that hundreds of thousands of potential mid-to-high-value targets, legitimate or not, overestimate the manual (as opposed to semi-automated) resources and effort it needs to devote per person to continuously compromise them. See the NSA FoxAcid and Turbine programs.
So these targets will think the NSA “endpoint target list” is small and does not include them, and will therefore feel fine with end-to-end encryption and merely moderate- or high-assurance endpoints, like Tor/Tails, Signal on iOS, or a high-end cryptophone.
Aside from madness, what could the Paris massacre terrorists, and those who support or strategize behind them, possibly have aimed to achieve?!
It can only be an increase of fear and hate among innocent civilians of two different religious faiths and cultures, which would lead to more war in Islamic states, and then to the coming to power of more fanatic, irrational regimes that claim to represent the true Islamic faith.
But more war in Islamic states, with “collateral” massacres and injustices towards millions of Muslim civilians, is unfortunately a goal that – whether out of Christian religious fanaticism and hate, political misjudgment or huge economic interests – has also been very actively promoted by some Western private actors (oil companies and defense contractors) and governmental ones.
We can’t fight one without the other.
WELCOME TO LINEAR CITY 2.0
Fifteen years later – given all the advances in self-driving vehicles, and the fact that it will still take many years before they are authorized on the streets, and decades before they make up the majority of cars – my Linear City concept could be amended by substituting all feeder systems to the main subway/train line – which in version 1.0 are a mix of mixed-grade buses and automated guided buses (i.e. with a driver!) – with purely self-driving small buses, running on a mix of separate-grade and mixed-grade routes. In some cases, separate-grade may just be a preferential lane, well marked on the asphalt and with sidewalk pedestrian warnings, without physical separation.
PREAMBLE
It has fostered the development of a more open and free society.
This is very arguable. A large majority of digital rights activists and IT security and privacy experts would disagree that, overall, it has.
The European Union is currently the world region with the greatest constitutional protection of personal data, which is explicitly enshrined in Article 8 of the EU Charter of Fundamental Rights.
This is correct, although Switzerland may be better in some regards. Nevertheless, even such standards have to date not at all been able to stop widespread illegal and/or unconstitutional bulk surveillance by EU states, until Snowden and Max Schrems came along. Furthermore, even if the US and EU states fully adhered to EU standards, that would significantly improve assurance against passive bulk surveillance, but it would do almost nothing against highly scalable targeted endpoint surveillance (NSA FoxAcid, Turbine, Hacking Team, etc.) of tens and hundreds of thousands of high-value targets, such as activists, parliamentarians, reporters, etc.
Preserving these rights is crucial to ensuring the democratic functioning of institutions and avoiding the predominance of public and private powers that may lead to a society of surveillance, control and social selection.
“May” lead?! There has been a ton of evidence available for the last two years that, to a large extent, we have been living for many years in a “society of surveillance, control and social selection.”
Internet … it is a vital tool for promoting individual and collective participation in democratic processes as well as substantive equality
Since it has emerged to be overwhelmingly a tool of undemocratic social control, it would be more correct to refer to its potential for “promoting individual and collective participation in democratic processes”, rather than presenting that as a current fact.
The principles underpinning this Declaration also take account of the function of the Internet as an economic space that enables innovation, fair competition and growth in a democratic context.
By placing this at the end of the preamble, it makes privacy and civil rights appear to be obstacles to innovation, fair competition and growth, which is not the case, as the Global Privacy as Innovation Network has clearly been arguing for over two years.
A Declaration of Internet Rights is crucial to laying the constitutional foundation for supranational principles and rights.
First, there have been about 80 Internet Bills of Rights approved by various stakeholders, including national legislative bodies. Second, a “declaration of rights” can very well be just smoke and mirrors if those rights are not defined clearly enough and meaningful democratic enforcement is not also enacted. There are really no steps toward proper “supranational principles and rights”, and related enforcement mechanisms, other than a number of nations bindingly agreeing to them, similarly to the process that led to the creation of the International Criminal Court.
(1) Human extinction, if AI machines cannot be controlled at all. He said: “Whereas the short-term impact of AI depends on who controls it, the long-term impact depends on whether it can be controlled at all”.
(2) Huge wealth [and power] gaps, unless AI machine owners allow a fair distribution once machines take on all human labor. He said: “If machines produce everything we need, the outcome will depend on how things are distributed.” Hawking continued: “Everyone can enjoy a life of luxurious leisure if the machine-produced wealth is shared, or most people can end up miserably poor if the machine-owners successfully lobby against wealth redistribution. So far, the trend seems to be toward the second option, with technology driving ever-increasing inequality.”
In that excerpt, Richard Stallman said that computing trustworthiness is a “practical advantage or convenience” and not a requirement for computing freedom. I countered with a vision in which the lack of meaningful trustworthiness inevitably turns the other four software freedoms into a disutility for their users, and for the people with whom they share code. I suggest that this realization should somehow be “codified” as a 5th freedom, or at least very widely acknowledged within the free software movement.
Introduce contextual ads made exclusively of product/service comparisons produced by democratically-controlled consumer organizations. In Italy, for example, there is the Altroconsumo organization, with hundreds of thousands of members, which regularly produces extensive comparative reports.
In practice: for each new report that comes out, the companies producing the products/services ranked in the top 30% are asked to sponsor its publication inside Wikimedia portals.
Such a formula could be extended to Wikimedia video, generating huge funds, arguably without any downside. Proceeds would be shared between Wikimedia and the consumer organization.
(originally written in 2011, and sent to Jimmy Wales, who found it interesting)
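The selection step of the formula above can be sketched in a few lines of Python. This is only an illustration: the report data, the product names and the `top_fraction` threshold are all hypothetical, not drawn from any real Altroconsumo report.

```python
# Hypothetical sketch of the sponsorship-matching step: given a
# comparative report (a list of products with quality scores), find
# the top 30% of products, whose makers would then be asked to
# sponsor the report's publication inside Wikimedia portals.

def sponsorship_candidates(report, top_fraction=0.30):
    """Return the products in the top `top_fraction` of a report, ranked by score."""
    ranked = sorted(report, key=lambda p: p["score"], reverse=True)
    cutoff = max(1, round(len(ranked) * top_fraction))  # at least one candidate
    return ranked[:cutoff]

# Example with 10 made-up products: the top 3 makers are approached.
report = [{"maker": f"Maker{i}", "score": s}
          for i, s in enumerate([72, 91, 65, 88, 79, 60, 83, 95, 70, 77])]
for product in sponsorship_candidates(report):
    print(f"Request sponsorship from {product['maker']} (score {product['score']})")
```

In this sketch, ranking and the 30% cutoff come straight from the proposal; everything else (dictionary keys, scoring scale) is an assumption made for the example.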
But maybe the best thing we can do to help reduce the chances of the catastrophic risks of an artificial superintelligence explosion (and other existential risks) is to become a “Unabomber with flowers”.
By that I mean, we could hide out in the woods, as the Unabomber did, living in modern off-grid eco-villages somewhere. But instead of sending bombs to those most irresponsibly advancing general artificial intelligence, we’d send them flowers, letters and fresh produce, and invitations for a free stay in the woods.
Here’s what the Unabomber wrote in his manifesto “Industrial Society and Its Future”, published by the New York Times in 1995:
173. If the machines are permitted to make all their own decisions, we can’t make any conjectures as to the results, because it is impossible to guess how such machines might behave. We only point out that the fate of the human race would be at the mercy of the machines. It might be argued that the human race would never be foolish enough to hand over all the power to the machines. But we are suggesting neither that the human race would voluntarily turn power over to the machines nor that the machines would willfully seize power. What we do suggest is that the human race might easily permit itself to drift into a position of such dependence on the machines that it would have no practical choice but to accept all of the machines’ decisions. As society and the problems that face it become more and more complex and machines become more and more intelligent, people will let machines make more of their decisions for them, simply because machine-made decisions will bring better results than man-made ones. Eventually a stage may be reached at which the decisions necessary to keep the system running will be so complex that human beings will be incapable of making them intelligently. At that stage the machines will be in effective control. People won’t be able to just turn the machines off, because they will be so dependent on them that turning them off would amount to suicide.
My wife Vera and my dear friend Beniamino Minnella surely think so.
Pushed around “presumably” with the key goal of giving Israeli intelligence full information on what other intelligence agencies were up to. The US made an illegal copy for itself and pushed that one around to other governments …
Here is the Wikipedia page with a long, detailed story of it, and here are excerpts from a relatively authoritative book on the history of Mossad, “Gideon’s Spies”, which I finished reading last Christmas:
https://en.wikipedia.org/wiki/Inslaw
http://cryptome.info/promis-mossad.htm
Rabe argued that just as the United States and other Western countries routinely sell arms to allied countries like Saudi Arabia, so too should Hacking Team be able to sell its wares. After all, he pointed out, more than a dozen of the September 11 hijackers were from that country.
“Do you want Saudi Arabia to be able to track that sort of thing or would you rather have them be able to operate behind contemporary secrecy and the Internet?” he said.
“My point is not really to argue the various dangers of different kinds of equipment but just to say that if you’re going to sell weaponry to a country, it’s a little disingenuous to say that a crime-fighting tool is off-limits.”
Rabe ended the call with a forceful defense of the company’s entire business model, saying that there should be a controlled, appropriate way for governments and law enforcement to breach digital security.
“[CEO David Vincenzetti] started life in what we would call defensive security, to keep people out, and then he realized as more and more of the communications became inaccessible, that there was a need for a tool that gave investigators the opportunity to do surveillance. I don’t think that’s really that hard to understand, frankly. I don’t think any of us are against cryptography, but what we’re against is police being able to catch criminals and prevent crime, that’s what we’re worried about.”
The Panopticon is a type of institutional building designed by the English philosopher and social theorist Jeremy Bentham in the late 18th century. The concept of the design is to allow a single watchman to observe (-opticon) all (pan-) inmates of an institution without the inmates being able to tell whether or not they are being watched. Although it is physically impossible for the single watchman to observe all cells at once, the fact that the inmates cannot know when they are being watched means that all inmates must act as though they are watched at all times, effectively controlling their own behaviour constantly.
Even if we succeeded, such devices may not serve their purpose or achieve wide adoption if the average citizen is constantly and increasingly surrounded by Net-connected devices with a mic (mobile, TV, PC, Internet of Things), which may allow extremely low-cost and scalable continuous surveillance. Schneier just made a fantastic analysis of the issue.
In fact, it would be inconvenient enough to have to place your ordinary phone in a purse, or under a thick pillow, before making a call with your (ultra-)private device, but it would be unbearable to most to have to go into the garden because their TV or their fridge may be listening.
It is crucial, therefore, to press for national laws forbidding the sales of any Internet-connectible devices without a certified physical switch-off for mic, camera and power.
If one doesn’t come soon, we may be led to a point where we might be better off quitting on privacy altogether, and turning our efforts to assessing the technical and political feasibility of making total surveillance as symmetrical as possible versus the powerful, somewhat in the vein of the Transparent Society paradigm of David Brin.
It is a major change in the existential nature of human life, but a large and increasing number of people (such as me) are already living in such a world, with the constant awareness that any word said near a mobile (i.e. always), or typed into an electronic device, may very well be collected and archived at extremely low cost, and be accessible to who knows how many.
It’s bearable.
What I can’t bear is that a small group of powerful or rich people, state-related and not, can increasingly enjoy ultra-privacy and/or huge access to the information of others. This creates a huge shift of unaccountable power towards them, with very dire consequences for the human race's prospects of survival and of avoiding durable forms of inhumane global governance.
GCHQ and the NSA developed an automated system named TURBINE, which would “allow the current implant network to scale to large size (millions of implants) by creating a system that does automated control implants by groups instead of individually”.