Queen Darknet

For a few years now I’ve had a “muse”: how to build the infrastructure portion of a Skynet, or the communications system used by a Borg Queen cluster. (As there are multiple Queen copies, they need some way to coordinate.)

I’m pretty sure I’ve worked out most of the ‘hard bits’ (but there are a few I’ll not be sharing here – don’t really want a “Skynet Moment” in our future…)

However, that same musing provides a basis for distributed, secure, encrypted sharing of files and compute functions. One that, if properly done, would be very hard to stop, shut down, or really even detect. Open, yet hidden.

The ideas behind a Darknet are not new. This just automates much of it and suggests directions to make it more universally available. Some organizations promote such things, for example: http://www.darknet.org.uk/ Much of what is presently done is as a “peer to peer” network of friends: http://en.wikipedia.org/wiki/Darknet_(file_sharing)

The term darknet refers to any private, distributed P2P filesharing network, where connections are made only between trusted peers — sometimes called “friends” (F2F) — using non-standard protocols and ports. Darknets are distinct from other distributed P2P networks as sharing is anonymous (that is, IP addresses are not publicly shared), and therefore users can communicate with little fear of governmental or corporate interference. For this reason, they are often associated with dissident political communications, as well as various illegal activities. More generally, the term “darknet” can be used to describe all non-commercial sites on the Internet, or to refer to all “underground” web communications and technologies, most commonly those associated with illegal activity or dissent.

But such techniques are not generally ‘user friendly’ enough for the average non-technical person. It would work better to have a bit of ‘open source download’ software that could just be installed on a computer and automagically join a darknet. Yet that has opportunities for Agency penetration and attack, so it needs some infrastructure behind it.

Recently several world governments have decided to shut down various parts of the open internet. We have the USA attempting to pass legislation that allows the recording and movie industries to shut down websites at will. We have China with internal controls, working to gain even more access to what is shared, and greater blocking of what is not “approved”.

There are more (including Megaupload being sued by the Department of Justice and the Megaupload suit against Universal, with Google on the side via YouTube). Basically, BOTH the Rapacious Capitalists and the Oppressive Socialists / Communists don’t like free people doing free things. The use of Twitter during recent Arab Spring events did not escape Government Notice either. The US Government has issued a request for software to dredge through ALL such social networking communications for patterns that might indicate incipient “problems”.

It’s all about control, and all big governments and big corporations want control.

The notion that every single Tweet and potentially every single email will be snooped, logged, recorded, and used against the involved individuals, is in the process of implementation. That means it is time to “go dark” and move those communications inside of encrypted systems.

So, IMHO, the time has come to share an idea. To turn it free and let it go where it will go, come what may.

It is NOT a finished product, nor even a finished specification. It is an architectural vision. There are many design details to work out, and a boat load of programming to make it real and running. The ‘good news’ is that large parts are already coded, so mostly just need integration.

One of the major areas needing ‘improvement’ is the area of vulnerability testing. There are very large forces opposed to free people doing what they wish, so you can be assured that any tool that facilitates that will be attacked. And this IS just such a tool. It can be used for secure computing and information sharing by ANY GROUP. That would include Mafia, Terrorists, oppressive governments, secret services, you name it.

That has been my major reason for not “sharing” until now. But it can also be used by oppressed peoples yearning to be free, by folks in a new holocaust to show their plight, and even by Free People just wishing to be free and communicate private things in private.

Once it exists, there will be opportunities to create anew all the things (like email providers and twitter and google and…) inside this new Dark World. So there will be plenty of opportunities to make money off the usual scams of spam and advertising, but the problem is that all those things will need to be recreated.
Yin meet Yang…

At the same time, times have changed from when I first thought of this. Encryption and distributed computing are nearly universal now. High speed encrypted networks (such as VPNs – Virtual Private Networks) are common. Even browsers have built-in secure connections (httpS type connections). It is “only a matter of time” now, and IMHO not a very long period of time, before someone else publishes anyway.

OK, with that preamble, what IS this idea?

Conceptually, it is pretty simple. It could be described from the top down or from the bottom up. The first is easier to ‘vision’, but the second makes it easier to see how it works. So I’m going to blend them a bit.

Top Down

At the top level, it is a distributed network based virtual machine cluster, running on many shared systems, and communicating over a hidden Virtual Private Network, with data stored on a distributed cryptographic file system.

No part lives on any ONE machine. Most parts are located on many machines. You can shut down parts, or break connectivity, and the virtual machine just shifts where it is doing its distributed computing and where it gets any particular encrypted data blocks. (Think of it as a giant RAID – Redundant Array of Inexpensive Disks).

Yes, there would be a large amount of encrypting and decrypting, and a fair amount of network traffic. In the days of a 386 CPU and dial-up networks, it wouldn’t work fast enough for most folks. (I’d originally thought about this back then, and figured it would take machines inside organizations with T1 type internet connections – 1.5 Mbit. Now we have faster speeds than that coming into private homes.)

So we can have many private computers, all contributing some compute resources and some storage space to a Virtual Machine that exists nowhere, and everywhere, that does the actual file sharing.

The individuals can then have their real physical machine “ask” the Distributed Virtual Machine for services. The “open” communication all happens between your laptop / desktop machine and the PART of the DVM that lives on your box. The only communication that is ‘in the clear’ and can be intercepted is that from your keyboard / screen to and from the DVM on your machine.

Your DVM node then communicates, via VPN connections to other nodes. What is visible on the public service network is a series of encrypted block transfers that seem to originate from random places and go to random places. What is more, cutting ANY of the connections does not stop the DVM from running nor does it stop the data from existing nor from moving. It is like losing a disk in a giant RAID, it just keeps going. Or losing a node in a giant High Availability Computing Cluster, that also just keeps on going.

Nonstop computing. Nonstop Data.

You can kill it by shutting down ALL the nodes. However… When they come back up, they reconnect and start running again. In essence, you must PURGE all the machines that contribute resources. Since the individuals who own those machines are not inclined to purge them, that will ‘be a problem’…

To prevent ‘polluting the compute pool’ via a broken or deliberately compromised node, several “Drone Nodes” can be told to store or compute the same things. Their results get compared inside the DVM and if one of the nodes is divergent, it can be flagged as suspect (or perhaps, in a bit of puckishness, assigned to a ‘rogue pool’ that gets spun out to let the Agencies go “play with themselves”…)
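The drone comparison might look something like this sketch (Python just for illustration; the node names, the task dispatch, and the result hashing are all my assumptions, not a spec):

```python
import hashlib
from collections import Counter

def run_on_drones(task, drones):
    """Dispatch the same task to several Drone Nodes, majority-vote the
    results, and flag any divergent node as suspect (spoofnet fodder)."""
    results = {name: fn(task) for name, fn in drones.items()}
    # Hash each result so arbitrarily large outputs compare cheaply.
    digests = {n: hashlib.sha256(repr(r).encode()).hexdigest()
               for n, r in results.items()}
    winner, _count = Counter(digests.values()).most_common(1)[0]
    suspects = [n for n, d in digests.items() if d != winner]
    accepted = next(r for n, r in results.items() if digests[n] == winner)
    return accepted, suspects
```

A rogue node that returns a divergent answer gets outvoted and flagged, without the requester ever needing to know which physical machines did the work.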

In theory, you can kill it by blocking all file / data block transfers that are encrypted (i.e. not ‘clear text’) but there is a counter to that (it consumes about 80% more resources, so it is not a preferred mode of operation, but is very possible). The problem with blocking all encrypted block transfers is that it also ends up stopping all https web browsing and all private use of VPNs. An ‘area for development’ would be assuring that any inter-DVM block transfers did not present an obvious ‘signature’ allowing selective identification.

The ‘work around’ for blocking encrypted transfers or ‘open transfer only’ is to simply embed the encrypted data in steganographic blocks. Shipping what look like personal photographs, or compiled programs, but have hidden data inside them in a way that is not visible nor provable. This adds overhead, so is not the preferred method, but as Moore’s Law doubles compute capacity every 18 months, it’s really just a 3 to 5 year “issue”…
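A minimal least-significant-bit embed over a raw pixel buffer shows why this is “not visible nor provable” by casual inspection – each cover byte changes by at most one. (A sketch over a bare bytearray under my own assumptions; a real system would work inside an actual image format.)

```python
def embed(cover: bytearray, payload: bytes) -> bytearray:
    """Hide payload in the least-significant bits of the cover bytes.
    Needs 8 cover bytes per payload byte."""
    bits = [(byte >> i) & 1 for byte in payload for i in range(8)]
    if len(bits) > len(cover):
        raise ValueError("cover too small for payload")
    out = bytearray(cover)
    for i, bit in enumerate(bits):
        out[i] = (out[i] & 0xFE) | bit   # overwrite only the lowest bit
    return out

def extract(stego: bytearray, n_bytes: int) -> bytes:
    """Recover n_bytes previously hidden by embed()."""
    out = bytearray()
    for b in range(n_bytes):
        byte = 0
        for i in range(8):
            byte |= (stego[b * 8 + i] & 1) << i
        out.append(byte)
    return bytes(out)
```

The stego copy differs from the cover by at most one count per byte – noise-level changes in a photograph.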

Bottom Up

We have at core a Distributed Cryptographic Networked Machine. (DCNM is in use by CISCO to mean something else, so my use here would be ‘idiosyncratic’. I also like DCM – echoing the Department of Civilian Marksmanship – for Distributed Compute Machine. Folks will need to decide what to call this beast. For now I’m using DVM, Distributed Virtual Machine, which is also a Doctor of Veterinary Medicine. Sigh, we’re running out of TLAs – TLA being Three Letter Acronym or Three Letter Agency, depending on context.)

At the bottom of this DVM is the use of the Domain Name Service (DNS). Most attempts to shut down services depend on changing DNS entries to ‘erase’ the offender. (DNS is one of the few places where there is any ‘central control’ over what is an official Domain Name, but it is not FORCED to be central, so we exploit that distributed ability.)

The Individual Private Machine (IPM) node must look to the DVM node running on it for the first level of DNS (at least, for any files or data that are ON the DVM network). As part of the installation of the DVM, the DNS of the IPM host is pointed to that Virtual Machine as top level DNS. This lets the DVM have an internal DNS that knows where all its parts are located without resorting to the external (and thus government controlled) DNS and it lets the IPM use that DNS to gain access to those blocks.
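In effect, the host gets a resolver that consults the DVM first and only falls back to public DNS for names the darknet doesn’t claim. Something like this sketch (the table contents and the ‘.dvm’ style names are invented for illustration):

```python
import socket

class DarknetResolver:
    """Consult the DVM's internal name table first; only names the
    darknet does not claim ever touch the external (public) DNS."""
    def __init__(self, internal_table, fallback=socket.gethostbyname):
        self.internal = dict(internal_table)
        self.fallback = fallback

    def resolve(self, name):
        if name in self.internal:
            return self.internal[name]   # lookup never leaves the DVM
        return self.fallback(name)       # ordinary public resolution
```

So a lookup of a darknet name answers from the hidden table, while everything else behaves exactly as before – the public side sees nothing new.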

One open issue is how to segment the DNS name space for DVM private use, IPM use, and specific shared services. A way for “anonymous publication of membership” needs to be worked out that is both distributed and resistant to spoofing, likely involving private key verification for any publication. Basically, I could ‘join’ and ask for .emsstuff to be under my control. I’d give a key to the DNS server, which (once it found that .emsstuff qualifier available) would grant me authority. Any FUTURE changes would require me to present my key again, perhaps with a public/private key redirection in the middle to prevent captured-key attacks. At that point any .emsstuff entry would be from my key (though not necessarily my public IP address nor machine) and, if found to be bogus, it all tracks back to my key. So intrusions and spoofs get self-limited (or handed over to the spoofnet machine with all the other attackers…)
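A first-come claim registry with key-gated updates might be sketched like this (the key handling is simplified to a fingerprint comparison; a real design would use proper public-key signatures as described above, and the class and method names are mine):

```python
import hashlib
import hmac

class NamespaceRegistry:
    """First-come claims on DNS qualifiers (e.g. '.emsstuff'); any later
    change must prove possession of the original claiming key."""
    def __init__(self):
        self._claims = {}   # qualifier -> (key fingerprint, target)

    def claim(self, qualifier, key: bytes, target):
        if qualifier in self._claims:
            raise PermissionError(qualifier + " already claimed")
        self._claims[qualifier] = (hashlib.sha256(key).hexdigest(), target)

    def update(self, qualifier, key: bytes, target):
        fp, _ = self._claims[qualifier]
        # Constant-time comparison avoids leaking fingerprint prefixes.
        if not hmac.compare_digest(fp, hashlib.sha256(key).hexdigest()):
            raise PermissionError("key does not match original claimant")
        self._claims[qualifier] = (fp, target)

    def lookup(self, qualifier):
        return self._claims[qualifier][1]
```

A spoofer who never held the original key can neither re-claim the qualifier nor alter where it points.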

WHATEVER IP number your machine is assigned gets communicated to your DVM and thus gets incorporated into the Virtual Cluster. (I’d prefer to avoid using the MAC address, if possible. In theory a snoop would at most find out what machines were participating in the Darknet by MAC address, but it would be nice to find a way to obscure even that information, as it is one of the few bits that map to particular hardware.) There will be some issues to work out involving NAT (Network Address Translation) and how that gets handled, but it really ought not to be much of a problem. (It provides a ‘spoofing’ opportunity: if the Virtual Cluster picks up your real IP number from the public network side of the communications, a government machine could ‘slide in’ in front of you and take your IP to try to participate ‘inside’. I think that’s not much of an issue, but it needs proving.) If need be, one could implement a new level of DNS that only exists on the private side inside the DVM, such that it could not be spoofed or intercepted.

OK, at that point you have a DVM node, it has secure private DNS, and it wants to join others. Part of any DVM node is storing shared data blocks in a secure form. This is done via an encrypted file system. Blocks are only decrypted inside the memory of a local machine and “open copies” of data are only stored in the IPM private computer open space after the individual chooses to download them from “wherever” they are stored physically.

That is, if you download Beethoven’s 9th, it only becomes a visible MP3 file after the download is decrypted and presented to your Open Side desktop. There are many cryptographic file systems available, and most likely it will take an integration of a couple of them to make things robust enough for resistance to widespread attack. XtreemFS has built-in cryptographic and distributed file support. An older one that I’ve used as the base of my ‘muse’ is TCFS, the Transparent Cryptographic File System, but the actual choice can be implementation dependent.

The key point here is that multiple copies of any block of data must be kept on multiple machines. That way, any time a file is accessed, the particular blocks in question may come from any of several machines. No individual knows what is stored in the encrypted blocks that they contribute to the DVM Cluster when they join (they just provide ‘free space’ to the cluster, which decides what gets stored where). Because of this, no individual can choose to ‘pollute’ or ‘corrupt’ any particular product, nor detect where any particular file is actually located. This provides both “deniability” to the participating individuals along with more robust file service.

In an ideal case, at least 2 copies of each block would be retrieved from 2 different locations and compared. As long as they agree, they could be accepted (one might want the ability to set a higher count when under attack). If there is a mismatch, more block copies are retrieved and compared until it is determined which blocks / DVM node is compromised; then it can be isolated.
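The fetch-compare-escalate loop could be sketched as follows (the node set, the digesting, and the strict-majority rule are my assumptions about one reasonable design):

```python
import hashlib
from collections import Counter

def fetch_block(block_id, nodes, start=2):
    """Fetch copies of a block from `start` nodes; on any disagreement,
    keep pulling more copies until one version holds a strict majority,
    then report the dissenting nodes for isolation."""
    got = {}
    for name, fetch in nodes.items():
        got[name] = fetch(block_id)
        if len(got) < start:
            continue   # always compare at least `start` copies
        digests = {n: hashlib.sha256(d).hexdigest() for n, d in got.items()}
        top, count = Counter(digests.values()).most_common(1)[0]
        if count >= 2 and count > len(got) // 2:
            suspects = [n for n, dg in digests.items() if dg != top]
            data = next(d for n, d in got.items() if digests[n] == top)
            return data, suspects
    raise RuntimeError("no majority among available copies of " + str(block_id))
```

With two agreeing copies the block is accepted immediately; a mismatch silently widens the fetch until the compromised node stands out.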

Computing is also distributed to various nodes. As all this happens INSIDE the DVM, it is hidden from inspection and even which node is doing that fetch / compare process can be ill defined. Your individual DVM Node might well be providing compute functions to OTHER downloads or messages, not just your own.

The code for distributed computing is already in existence, but would likely need enhancement and some customization. MOSIX is an example of a self configuring distributed Linux; Beowulf Clusters are another, with more manual assembly that could be automated. Some work would need to be done to move them to a Virtual Machine base and layer that on top of an encrypted block transfer cryptographic file system. This is the place that, IMHO, will take the most work. Initial versions could be brought up ‘in the clear’ then moved to ever deeper levels of virtual and encrypted distributed function. PVM (Parallel Virtual Machine) is another part of the base code available.

In essence, we already have JAVA as an existence proof of a downloaded Virtual Machine, we just need to make one of those that Clusters like MOSIX and that works on top of encrypted layers (much like Java inside an httpS link).

Once the Virtual Machine is running, it stores data “everywhere and nowhere” in particular.

Some of the technical issues involved in keeping encryption secure can be found in the documentation at the TrueCrypt site: http://www.truecrypt.org/docs/

TCFS has the interesting feature of a ‘quorum’. Each node can have its own ‘key phrase’ and the total file system can only be decrypted when a quorum of users connect. By defining a few initial nodes as “Queen Darknet” nodes, they could form the initial DVM Machine and the repository for the Virtual Machine operating system. Then they could have a ‘shared encrypted secret’ and identify other Darknet Queens via an open exchange.
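The ‘quorum’ idea generalizes nicely to threshold secret sharing: split The Secret so that any quorum of Queens can rebuild it, while fewer learn nothing. A bare-bones Shamir sketch of the underlying math (TCFS’s actual mechanism differs; this is just an illustration):

```python
import random

PRIME = 2**127 - 1  # Mersenne prime, large enough for a 16-byte secret

def split_secret(secret: int, n_shares: int, quorum: int):
    """Shamir threshold scheme: any `quorum` of the n shares rebuilds
    the secret; fewer reveal nothing about it."""
    coeffs = [secret] + [random.randrange(PRIME) for _ in range(quorum - 1)]
    def poly(x):
        acc = 0
        for c in reversed(coeffs):   # Horner evaluation mod PRIME
            acc = (acc * x + c) % PRIME
        return acc
    return [(x, poly(x)) for x in range(1, n_shares + 1)]

def recover_secret(shares):
    """Lagrange interpolation of the polynomial at x = 0."""
    secret = 0
    for i, (xi, yi) in enumerate(shares):
        num, den = 1, 1
        for j, (xj, _) in enumerate(shares):
            if i != j:
                num = (num * -xj) % PRIME
                den = (den * (xi - xj)) % PRIME
        secret = (secret + yi * num * pow(den, -1, PRIME)) % PRIME
    return secret
```

Each Queen holds one share; only when a quorum connects can the cluster key material be reassembled.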

The candidate Darknet Queen would encrypt her copy of The Secret and present it to a detected DVM (via their public key). That cluster would then decrypt it (using their private key) and IFF The Secret Key matches, they could admit that Candidate Queen into the DVM cluster and share the OS file system. This level of implementation is optional, especially for early instantiations, but it would be nice to have something like this eventually. If a node were found to be “suspect” or “compromised”, it could be told a ‘secret’ that assigned it to a ‘special’ cluster, the “spoofnet” referenced above. This behaviour could also be based on a quorum system, but needs to have a way to protect against an attack via a government horde of machines being presented as ‘mutually trusting’ and thus taking over traffic. I think this can be done with a pre-encrypted private secret, but there are likely better ways.
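The paragraph above sketches this with public-key encryption of The Secret; as a simpler stdlib stand-in, a nonce-based HMAC challenge gives the same “prove you hold The Secret without ever transmitting it” shape (function names invented here for illustration):

```python
import hashlib
import hmac
import os

def admit_candidate(cluster_secret: bytes, candidate_answer) -> bool:
    """Challenge-response admission for a would-be Darknet Queen: the
    cluster issues a fresh random nonce, and the candidate must return
    HMAC(The Secret, nonce).  A fresh nonce per challenge means an
    eavesdropper cannot replay an old answer."""
    nonce = os.urandom(16)
    expected = hmac.new(cluster_secret, nonce, hashlib.sha256).digest()
    return hmac.compare_digest(expected, candidate_answer(nonce))
```

A genuine candidate computes the same HMAC from her own copy of The Secret; an impostor, lacking it, fails (and could quietly be handed the spoofnet secret instead).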

I could even envision a few Darknet Queens that would form a core cluster DVM which would then “anoint” Lesser Nodes as compute servers or storage servers. That is, your laptop might be ranked as “probably OK” when you install the applications, so you could contribute a GB of disk to the Virtual Filesystem that stored shared user data, but not the OS codes… and you could provide compute services to the user level workload (encrypting or decrypting OTHER Lesser Node shared blocks, for example).

This would make the whole system more robust to “spoofing” via Government Agencies putting up a bunch of Nodes and then inspecting memory contents and / or communications. Basically, they only get to see the user level, not the private level. I think that with proper design, this need could be avoided, but for initial instantiations especially it would be beneficial for a Darknet Queen Candidate to only have a Coronation via a trusted host mechanism that helps filter out spoofing / attacking nodes. (A derivation of this system leads to Skynet and a Borg Queen system via parasitism of involuntary nodes, so the direction of ‘involuntary recruitment’ is to be discouraged.)

In Conclusion

I think it is time for an Open Source Project (or perhaps a few private Shared Source Projects) to start building Darknet Queens, and start making a file sharing system built on top of Distributed Cryptographic File Systems, on a Distributed Virtual Machine, made from nodes that voluntarily self-recruit and provide a share of their disk space and compute power to this Virtual Machine. All communications run over VPN (Virtual Private Network) links that are established as needed between the nodes. First level DNS is provided by the DVM itself (perhaps as a unique system, and probably with “illegal” qualifiers – that is, not just the “approved” .com and .net, but things such as .dvd or .download as desired).

All the parts exist. It ought not to take long to integrate them. Then would begin the long hard work of proving security and assuring continuity.

But such is the price of liberty.

Postscript: It is always possible that this system is already built and running. That’s part of the “problem” faced by darknets: how to ‘recruit’ new membership while staying invisible. There are brighter and better hackers than me out there, so this could easily be a ‘done deal’ and I’m just coming to it late. But I’ve not seen one like this, yet…


About E.M.Smith

A technical managerial sort interested in things from Stonehenge to computer science. My present “hot buttons” are the mythology of Climate Change and ancient metrology; but things change...
This entry was posted in Political Current Events, Tech Bits.

17 Responses to Queen Darknet

  1. adolfogiurfa says:

    No way, the best is to have nothing to hide or trying telepathy :-)

  2. Jeff Alberts says:

    People will always have something to hide, for legitimate reasons.

  3. tckev says:

    A very interesting idea that I too have mulled over in the past. Your ideas go further than I did. Some of my thoughts were –
    Use the ready made Tor network for anonymity, and also adding some Forward Error Correction within the data encryption would make a system that is very robust and secure. (Data encryption with Turbo code or Reed–Solomon FEC)
    There is a Linux distro called Tails (if only they’d fix it properly)
    https://tails.boum.org/index.en.html (I’ve mentioned it in T3 before)

    also SETI did some distributed processing on user desktop PCs.
    setiathome.berkeley.edu/sah_papers/CISE.pdf (opens as PDF)
    and has some reusable ideas.

    Good luck.

  4. tckev says:

    Two other areas that might be worth a look are –
    Gnutella as the basic peer to peer network
    and with many other current and proposed projects listed here –

  5. George says:

    1. Create your own “root” name server with your own top level domains. This can be done in about an hour with someone knowing how to configure dns servers. You simply need to be authoritative for “.”
    2. Allocate your own ip address space within that network. This means you get to use the entire space. Once packets are encapsulated inside a tunnel header, the IP address inside the tunneled packet is ignored by the transport layer.
    3. Interconnect as many nodes as possible using as many different modes as possible. This would include even running direct leased lines, hard wires between homes in a neighborhood, tunneled traffic over the “real” internet, etc.
    4. Develop your own routing protocol to make it more difficult for someone to disrupt your network.

  6. Mark Miller says:

    What you describe about a “distributed virtual machine” is an old idea that’s been around since the 1970s, from Xerox PARC, though I don’t think they implemented it back then. One of the ideas they batted around for what personal computers would be was as caches for the internet. In fact, the system itself would operate like the internet. The system software would be made up of a network of loosely bound, operational objects that would communicate via message passing within a single machine. Thinking about this now, I’m thinking, “Oh, if only they had done it!” This originated out of Alan Kay’s work on object-oriented programming. He’s currently working on a prototype system that implements this idea, in a project called STEPS, at Viewpoints Research Institute. One of the things he says is that the computing that makes the system run is easily distributed to multiple machines. Each has their own minimal VM running on it. So even though the user sees a unified environment that looks like a desktop OS, the functionality that makes up the system may be distributed between multiple machines. It’s all operating under the covers. The user need not be conscious of how much the computing is distributed, or where each piece of functionality that makes the system run is located. The system software worries about that through a “publish/subscribe” framework. All of the “nodes” can be brought into a single system, if desired, without a rewrite of the software. The main ideas behind this are to make the system more responsive (in terms of computing capacity), and less brittle to loss of functionality.

  7. Mark Miller says:

    Actually, I think I limited the concept too much in that last sentence about “main ideas.” My sense is another part of it is to enable people to share information through functional objects, rather than “dead” data files, though I am not clear on how they (at VPRI) are addressing this concept yet. This would be so that both publisher and reader could control content in terms of access to it (for editing, copying, etc.), and presentation (how it appears to the reader). It would also allow more kinds of content to be transmitted. Right now, people are able to share text, bitmap images, slideshows, and video fairly easily, but it’s more difficult to share something functional, like a presentation, spreadsheet, or simulation, as part of a group conversation.

  8. George says:

    Another way to do it is with “walled gardens” of the entire 32 bit IPv4 address space interconnected with IPv6. One can reach any of the ip addresses in their address space but a portion are reserved for “portals” to other such “walled gardens” and the traffic between them is tunneled over v6.

    But the important thing is that this traffic does not need to always be “tunneled” through the existing internet. Over time various networks, where they appear in physical proximity, might exchange traffic by physical means such as a cable between two “cages” in a data center. Heck, one could even develop their own transport protocol to replace IPv4 or IPv6 completely. All it would take is a different driver for Linux.

  9. Ian W says:

    I was playing with systems like this in the 1980’s where processes could run as mirrored pairs with processes in other machines and with subsystems distributed across continents. The big problem as you point out is the communications and messaging between processes. This has the overt bandwidth problem, but it also flags up busy nodes that are apparently always interconnected. Just traffic levels from nodes could be sufficient to spotlight the dark network, even if it was difficult to get into. At the same time ‘cloud computing’ is starting to become commercial and in many ways what you are suggesting is a secure distributed, replicated cloud.
    However, as you say it is really time these were created as big brother will be looking closer soon — see:


  10. I do not have computer expertise, but I see the need for a darknet to protect information ASAP.

    About 30 years ago citizens of the old USSR could not obtain George Orwell’s novel, “1984.”

    The book is still available here, but many aspects of our government now remind me of the tyranny I observed in the USSR 30 years ago.

    If leaders of the Western scientific community and the US NAS, the UK RS and the UN IPCC continue to stonewall clear evidence of deception in the Climategate emails and documents, I expect that unpopular experimental observations on the origin, composition and source of energy of the Earth-Sun system will start to disappear, as well as access to classic novels like “1984.”


  11. Eric Fithian says:

    Names… Names…
    How about MYCROFT, as in Holmes, as in the computer central to “The Moon Is a Harsh Mistress”…?
    Recall “Mike” was comprised of many interconnected parts that only “came aware” when completely connected…!

    Sign me up for your Darknet! I have several boxen here which could be conscripted….

  12. George says:

    One of my favorite tunnel programs was CIPE. It sends packets over UDP, relying on the encapsulated TCP connection to do re-sends in case of packet loss. It also did things like sending “bogus” traffic so that traffic analysis wouldn’t work. It would send packets even when there was nothing to send, so you could not monitor a flow and get an idea of how active that “channel” was.

    Olaf’s site still exists but he claims OpenVPN is a better alternative:

  13. George says:

    By the way, CIPE won’t build with 3.0 kernels.

  14. I don’t like cloud computing, simply because I want to know where my data is and that it is backed-up properly – this is based on experience with old hardware, and I may need to change my mind on this in future. It does however seem custom-built for darknets. I won’t say that it is impossible for government or any other organisation to monitor all traffic through such a net, but it certainly seems a mammoth task to analyse all data to see if it is steganographic or otherwise encrypted, so any discovery of a darknet will be mostly based on chance or investigations of people where the government (or others) already have suspicions of something.

    Even with all the encryption and hidden files, at base it really depends on whether you trust the people that you are communicating with, and whether they are in fact worthy of that trust. I think that any system will ultimately depend on the integrity of the people involved in it, and certainly in the UK there have recently been revelations of “sleeping policemen” in various organisations who have been passing data on to their superiors over decades. Such people will be difficult to spot.

  15. E.M.Smith says:

    As many have noted, there are lots of options on how to do the concepts. That’s part of the whole point. You must look everywhere to look anywhere…

    FWIW, one small victory. A court in the USA recently ruled that a person using Truecrypt could not be forced to divulge his passphrase. Not only is this a clear win for the holder of the information, but it also said that the police agency involved was unable to crack the encryption.

    So most of the time most of my “disk” is a TrueCrypt file that is left encrypted and closed. Some virus gets into the box or someone cracks into it, they see what looks like a large binary format file of no particular interest ( it is set to a common file type that’s expected to be a ‘bag of bits’ but not an executable, so viruses will not try to worm their way into a non-executable non-program…)

    I can move my “whole disk” over networks and reopen it at the other side, and if the network link is also encrypted, it’s ‘double encrypted’ in transit. Most of the time, when I open the “disk”, I press the little button that shuts off network access… so any hacker is blocked from view (though resident viruses, if any, are not blocked – watch that system monitor for unexpected program activity… and listen for disk accesses when you are doing nothing…)

    So someone takes my laptop, I just get a copy of the archived “bag of bits” from “out there somewhere” and no big deal… Someone grabs a copy of what’s “out there”, they have a nice useless chunk of bits. Oh, and things inside the bag of bits can themselves be encrypted containers… with different methods…

    At any rate, yes, the ideas have been around a long time, just now the available bandwidth and CPU make it practical.

    FWIW the next step is a RaspberryPi like CPU engine, OS on a SD card, working data on a USB “stick”. Now the working copy lives on a little “dongle” in a hole in the wall when not in use. The OS is booted “fresh” each time ( burn a new SD card before each boot). Someone takes the “box” they get about $40 of nice hardware and a generic Linux on SD card… Dongle is encrypted. Backup copy is “out there” and also encrypted… Have a nice day…
