Research Papers
Learn What Gopher is Built On
Gopher Security is built on peer-reviewed papers and protocols.
CRYSTALS-Kyber: a key encapsulation mechanism from the “Cryptographic Suite for Algebraic Lattices”, designed to withstand attacks by large quantum computers
Kyber is an IND-CCA2-secure key encapsulation mechanism (KEM), whose security is based on the hardness of solving the learning-with-errors (LWE) problem over module lattices. Kyber is one of the finalists in the NIST post-quantum cryptography project. The submission lists three different parameter sets aiming at different security levels. Specifically, Kyber-512 aims at security roughly equivalent to AES-128, Kyber-768 aims at security roughly equivalent to AES-192, and Kyber-1024 aims at security roughly equivalent to AES-256.
Read paper
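The contract a KEM like Kyber exposes is small: key generation, encapsulation against a public key, and decapsulation with the secret key. Below is a minimal sketch of that contract using classical X25519 from the `cryptography` package as a stand-in; it is not post-quantum (real Kyber comes from a PQ library such as liboqs), and the function names are illustrative.

```python
# KEM contract sketch: keygen / encapsulate / decapsulate.
# X25519 is a classical stand-in, NOT Kyber and NOT post-quantum.
from cryptography.hazmat.primitives.asymmetric.x25519 import X25519PrivateKey
from cryptography.hazmat.primitives.kdf.hkdf import HKDF
from cryptography.hazmat.primitives import hashes

def keygen():
    sk = X25519PrivateKey.generate()
    return sk, sk.public_key()

def encapsulate(pk):
    # Sender derives a shared secret plus a "ciphertext" (here: an ephemeral public key).
    eph = X25519PrivateKey.generate()
    shared = eph.exchange(pk)
    ss = HKDF(algorithm=hashes.SHA256(), length=32, salt=None, info=b"kem").derive(shared)
    return eph.public_key(), ss          # (ciphertext, shared secret)

def decapsulate(sk, ct):
    shared = sk.exchange(ct)
    return HKDF(algorithm=hashes.SHA256(), length=32, salt=None, info=b"kem").derive(shared)

sk, pk = keygen()
ct, ss_sender = encapsulate(pk)
assert decapsulate(sk, ct) == ss_sender   # both sides now hold the same secret
```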
The Noise Protocol Framework: crypto protocols that are simple, fast, and secure
Noise is a framework for building crypto protocols. Noise protocols support mutual and optional authentication, identity hiding, forward secrecy, zero round-trip encryption, and other advanced features.
Read paper
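One concrete piece of the framework: the Noise specification defines its own two-output HKDF, and every Diffie-Hellman result in a handshake is mixed into a chaining key to derive fresh encryption keys. A sketch of that MixKey step, following the spec's HMAC construction instantiated with SHA-256:

```python
import hmac, hashlib

def noise_hkdf(chaining_key: bytes, ikm: bytes):
    # Two-output HKDF as defined in the Noise specification.
    temp = hmac.new(chaining_key, ikm, hashlib.sha256).digest()
    out1 = hmac.new(temp, b"\x01", hashlib.sha256).digest()
    out2 = hmac.new(temp, out1 + b"\x02", hashlib.sha256).digest()
    return out1, out2

def mix_key(ck: bytes, dh_output: bytes):
    # SymmetricState.MixKey: each DH result updates the chaining key and
    # yields a fresh encryption key, which is what gives the handshake
    # forward secrecy as it progresses.
    ck, k = noise_hkdf(ck, dh_output)
    return ck, k
```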
Falcon: Fast-Fourier Lattice-based Compact Signatures over NTRU
Falcon is a cryptographic signature algorithm submitted to the NIST Post-Quantum Cryptography Project on November 30th, 2017. It was designed by Pierre-Alain Fouque, Jeffrey Hoffstein, Paul Kirchner, Vadim Lyubashevsky, Thomas Pornin, Thomas Prest, Thomas Ricosset, Gregor Seiler, William Whyte, and Zhenfei Zhang. The point of a post-quantum cryptographic algorithm is to retain its security properties even in the face of quantum computers. Quantum computers are deemed feasible according to our current understanding of the laws of physics, but some significant technological issues remain to be solved before a fully operational unit can be built. Such a quantum computer would very efficiently break the usual asymmetric encryption and digital signature algorithms based on number theory (RSA, DSA, Diffie-Hellman, ElGamal, and their elliptic curve variants). Falcon is based on the theoretical framework of Gentry, Peikert and Vaikuntanathan for lattice-based signature schemes. We instantiate that framework over NTRU lattices, with a trapdoor sampler called "fast Fourier sampling". The underlying hard problem is the short integer solution (SIS) problem over NTRU lattices, for which no efficient solving algorithm is currently known in the general case, even with the help of quantum computers.
Read paper
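Like any signature scheme, Falcon exposes the usual keygen/sign/verify contract. A sketch of that contract using classical Ed25519 from the `cryptography` package as a stand-in, since Falcon itself requires a post-quantum library such as liboqs:

```python
# Sign/verify contract sketch; Ed25519 is a classical stand-in, not Falcon.
from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PrivateKey

sk = Ed25519PrivateKey.generate()
pk = sk.public_key()
msg = b"attest this"
sig = sk.sign(msg)
pk.verify(sig, msg)   # raises InvalidSignature if the message was tampered with
```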
Combining kTLS and BPF for Introspection and Policy Enforcement
Kernel TLS is a mechanism introduced in Linux kernel 4.13 to allow the datapath of a TLS session to be encrypted in the kernel. One advantage of this mechanism over traditional user-space TLS is that it allows sendfile operations to avoid otherwise expensive bounce buffers for doing encryption in user space. Additionally, as of kernel 4.17 the Linux kernel has supported implementing socket-based BPF policies by attaching SK_MSG programs to sockets. These can be used to monitor TCP sessions and enforce policies by allowing or dropping messages using an administrator-supplied BPF program. However, until recently these features have not been allowed to coexist: users had to choose between the performance improvements offered by kTLS or applying BPF policies using SK_MSG programs. Perhaps worse, BPF policies operating with traditional TLS in place, like those supported by OpenSSL, had minimal visibility into TCP-based messages because they received already-encrypted traffic. In this paper we describe the new kTLS/BPF stack implementation and its user API.
Read paper
Runtime Security Monitoring with eBPF
From containerized workloads to microservices architecture, developers are rapidly adopting new technologies that allow organizations to scale at unprecedented rates. Unfortunately, fast-mutating architectures are hard to keep track of, and runtime security monitoring tools are now required to collect application-level and container-level context in order to provide actionable alerts. This paper explains how eBPF has made it possible to create a new generation of runtime security tools with significantly better performance, context, and overall signal-to-noise ratio compared to legacy tools like AuditD.
Read paper
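A minimal sketch of the approach, assuming the BCC toolkit and root privileges on a BPF-capable kernel: attach a kprobe to the execve syscall and stream events. This is the skeleton that richer context collection (container IDs, arguments, verdicts) builds on.

```python
# Trace every execve() on the host with BCC (https://github.com/iovisor/bcc).
from bcc import BPF

prog = r"""
int trace_exec(void *ctx) {
    bpf_trace_printk("execve observed\n");
    return 0;
}
"""

b = BPF(text=prog)
b.attach_kprobe(event=b.get_syscall_fnname("execve"), fn_name="trace_exec")
b.trace_print()   # stream raw events; a real tool would enrich and alert
```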
Nonce-Disrespecting Adversaries: Practical Forgery Attacks on GCM in TLS
We investigate nonce reuse issues with the GCM block cipher mode as used in TLS and focus in particular on AES-GCM, the most widely deployed variant. With an Internet-wide scan we identified 184 HTTPS servers repeating nonces, which fully breaks the authenticity of the connections. Affected servers include large corporations, financial institutions, and a credit card company. We present a proof of concept of our attack that violates the authenticity of affected HTTPS connections, which in turn can be used to inject seemingly valid content into encrypted sessions. Furthermore, we discovered over 70,000 HTTPS servers using random nonces, which puts them at risk of nonce reuse in the unlikely case that large amounts of data are sent via the same session.
Read paper
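The failure mode is easy to reproduce locally. GCM runs AES in counter mode, so two messages sealed under the same key and nonce reuse the same keystream, and XORing the two ciphertexts cancels it out. A sketch with the `cryptography` package:

```python
# Why nonce reuse breaks GCM: the repeated CTR keystream leaks the XOR of
# the plaintexts (and Joux's "forbidden attack" then recovers the auth key).
import os
from cryptography.hazmat.primitives.ciphers.aead import AESGCM

key = AESGCM.generate_key(bit_length=128)
aes = AESGCM(key)
nonce = os.urandom(12)          # reused below: never do this

p1 = b"transfer $100 to Alice"
p2 = b"transfer $999 to Mallory"
c1 = aes.encrypt(nonce, p1, None)
c2 = aes.encrypt(nonce, p2, None)

# The ciphertext body (before the 16-byte tag) is plaintext XOR keystream,
# so XORing the two bodies yields the XOR of the two plaintexts.
n = min(len(p1), len(p2))
xored = bytes(a ^ b for a, b in zip(c1[:n], c2[:n]))
assert xored == bytes(a ^ b for a, b in zip(p1[:n], p2[:n]))
```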
A Security Analysis of the Composition of ChaCha20 and Poly1305
This note contains a security reduction to demonstrate that Langley’s composition of Bernstein’s ChaCha20 and Poly1305, as proposed for use in IETF protocols, is a secure authenticated encryption scheme. The reduction assumes that ChaCha20 is a PRF, that Poly1305 is ε-almost-∆-universal, and that the adversary is nonce respecting.
Read paper
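The note's proof assumes a nonce-respecting sender, so deriving each nonce from a counter that never repeats under one key is the natural discipline. A usage sketch with the `cryptography` package:

```python
# Nonce-respecting use of the ChaCha20-Poly1305 AEAD: a 64-bit counter
# packed into the 12-byte nonce guarantees no nonce ever repeats per key.
import struct
from cryptography.hazmat.primitives.ciphers.aead import ChaCha20Poly1305

key = ChaCha20Poly1305.generate_key()
aead = ChaCha20Poly1305(key)

def seal(counter: int, plaintext: bytes, aad: bytes) -> bytes:
    nonce = struct.pack(">4xQ", counter)   # 4 zero bytes + 64-bit big-endian counter
    return aead.encrypt(nonce, plaintext, aad)

ct = seal(1, b"hello", b"header")
assert aead.decrypt(struct.pack(">4xQ", 1), ct, b"header") == b"hello"
```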
The Double Ratchet Algorithm
The Double Ratchet algorithm is used by two parties to exchange encrypted messages based on a shared secret key. Typically the parties will use some key agreement protocol (such as X3DH [1]) to agree on the shared secret key. Following this, the parties will use the Double Ratchet to send and receive encrypted messages. The parties derive new keys for every Double Ratchet message so that earlier keys cannot be calculated from later ones. The parties also send Diffie-Hellman public values attached to their messages. The results of Diffie-Hellman calculations are mixed into the derived keys so that later keys cannot be calculated from earlier ones. These properties give some protection to earlier or later encrypted messages in case of a compromise of a party’s keys. The Double Ratchet and its header encryption variant are presented below, and their security properties are discussed.
Read paper
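The symmetric half of the ratchet is just a one-way KDF chain. A sketch using the constants the Double Ratchet specification suggests for KDF_CK (HMAC-SHA256 with single-byte inputs 0x01 and 0x02):

```python
# Symmetric-key ratchet: each step derives a one-time message key and
# replaces the chain key, so compromising the current chain key cannot
# reveal earlier message keys.
import hmac, hashlib

def kdf_ck(chain_key: bytes):
    message_key = hmac.new(chain_key, b"\x01", hashlib.sha256).digest()
    next_chain_key = hmac.new(chain_key, b"\x02", hashlib.sha256).digest()
    return next_chain_key, message_key

ck = b"\x00" * 32              # in practice: the output of a DH ratchet step
for _ in range(3):
    ck, mk = kdf_ck(ck)        # mk encrypts exactly one message, then is discarded
```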
Coconut: Threshold Issuance Selective Disclosure Credentials with Applications to Distributed Ledgers
Coconut is a novel selective disclosure credential scheme supporting distributed threshold issuance, public and private attributes, re-randomization, and multiple unlinkable selective attribute revelations. Coconut integrates with blockchains to ensure confidentiality, authenticity and availability even when a subset of credential issuing authorities are malicious or offline. We implement and evaluate a generic Coconut smart contract library for Chainspace and Ethereum, and present applications related to anonymous payments, electronic petitions, and censorship resistance. Coconut uses short and computationally efficient credentials.
Read paper
Threshold cryptography in P2P and MANETs: The case of access control
Ad hoc groups, such as peer-to-peer (P2P) systems and mobile ad hoc networks (MANETs) represent recent technological advancements. They support low-cost, scalable and fault-tolerant computing and communication. Since such groups do not require any pre-deployed infrastructure or any trusted centralized authority they have many valuable applications in military and commercial settings as well as in emergency and rescue operations. However, due to the lack of centralized control, ad hoc groups are inherently insecure and vulnerable to attacks from both within and outside the group. Decentralized access control is the fundamental security service for ad hoc groups. It is needed not only to prevent unauthorized nodes from becoming members but also to bootstrap other security services such as key management and secure routing. In this paper, we construct several distributed access control mechanisms for ad hoc groups. We investigate, in particular, the applicability and the utility of threshold cryptography (more specifically, various flavors of existing threshold signatures) towards this goal.
Read paper
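The simplest threshold primitive underneath such constructions is Shamir secret sharing: split a group secret among n members so that any t of them can reconstruct it and fewer learn nothing. A self-contained sketch over a prime field (illustrative parameters, not production code):

```python
# Shamir secret sharing: the secret is the constant term of a random
# degree-(t-1) polynomial; shares are points on it; t points interpolate it.
import random

P = 2**127 - 1   # a Mersenne prime, large enough for a demo

def split(secret: int, n: int, t: int):
    coeffs = [secret] + [random.randrange(P) for _ in range(t - 1)]
    return [(x, sum(c * pow(x, i, P) for i, c in enumerate(coeffs)) % P)
            for x in range(1, n + 1)]

def reconstruct(shares):
    # Lagrange interpolation at x = 0 recovers the constant term.
    secret = 0
    for i, (xi, yi) in enumerate(shares):
        num = den = 1
        for j, (xj, _) in enumerate(shares):
            if i != j:
                num = num * (-xj) % P
                den = den * (xi - xj) % P
        secret = (secret + yi * num * pow(den, P - 2, P)) % P
    return secret

shares = split(123456789, n=5, t=3)
assert reconstruct(shares[:3]) == 123456789   # any 3 of the 5 shares suffice
```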
Sphinx: A Compact and Provably Secure Mix Format
Sphinx is a cryptographic message format used to relay anonymized messages within a mix network. It is more compact than any comparable scheme, and supports a full set of security features: indistinguishable replies, hiding the path length and relay position, as well as providing unlinkability for each leg of the message’s journey over the network. We prove the full cryptographic security of Sphinx in the random oracle model, and we describe how it can be used as an efficient drop-in replacement in deployed remailer systems.
Read paper
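The sketch below shows only the layered-encryption idea that mix formats build on, not Sphinx's actual contribution (its compact, constant-size header and reply support): the sender wraps the payload once per hop, and each mix peels exactly one AEAD layer with its own key.

```python
# Layered ("onion") encryption over a fixed path of three mixes.
import os
from cryptography.hazmat.primitives.ciphers.aead import ChaCha20Poly1305

hop_keys = [ChaCha20Poly1305.generate_key() for _ in range(3)]  # one key per mix

def wrap(payload: bytes, keys) -> bytes:
    packet = payload
    for key in reversed(keys):               # innermost layer belongs to the exit hop
        nonce = os.urandom(12)
        packet = nonce + ChaCha20Poly1305(key).encrypt(nonce, packet, None)
    return packet

def peel(packet: bytes, key: bytes) -> bytes:
    nonce, body = packet[:12], packet[12:]
    return ChaCha20Poly1305(key).decrypt(nonce, body, None)

packet = wrap(b"anonymous payload", hop_keys)
for key in hop_keys:                         # each mix, in path order, peels one layer
    packet = peel(packet, key)
assert packet == b"anonymous payload"
```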
Cashmere: Resilient Anonymous Routing
Anonymous routing protects user communication from identification by third-party observers. Existing anonymous routing layers utilize Chaum-Mixes for anonymity by relaying traffic through relay nodes called mixes. The source defines a static forwarding path through which traffic is relayed to the destination. The resulting path is fragile and short-lived: failure of one mix in the path breaks the forwarding path and results in data loss and jitter before a new path is constructed. In this paper, we propose Cashmere, a resilient anonymous routing layer built on a structured peer-to-peer overlay. Instead of single-node mixes, Cashmere selects regions in the overlay namespace as mixes. Any node in a region can act as the mix, drastically reducing the probability of a mix failure. We analyze Cashmere’s anonymity and measure its performance through simulation and measurements, and show that it maintains high anonymity while providing orders of magnitude improvement in resilience to network dynamics and node failures.
Read paper
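The region idea in miniature, with hypothetical node IDs: treat an ID prefix as the mix, so any live node matching the prefix can decrypt and forward, and a single node failure does not break the path.

```python
# A mix is a region of the overlay namespace (an ID prefix), not one node.
ID_BITS = 32

def region_of(node_id: int, prefix_bits: int) -> int:
    return node_id >> (ID_BITS - prefix_bits)

def relay_candidates(nodes, region: int, prefix_bits: int):
    # Every node whose ID falls in the region can act as the relay.
    return [n for n in nodes if region_of(n, prefix_bits) == region]

nodes = [0x1A2B0001, 0x1A2BFFFF, 0x9C000042]
region = region_of(0x1A2B0001, 16)
assert len(relay_candidates(nodes, region, 16)) == 2   # two interchangeable relays
```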
Eclipse Attacks on Overlay Networks: Threats and Defenses
Overlay networks are widely used to deploy functionality at edge nodes without changing network routers. Each node in an overlay network maintains connections with a number of peers, forming a graph upon which a distributed application or service is implemented. In an “Eclipse” attack, a set of malicious, colluding overlay nodes arranges for a correct node to peer only with members of the coalition. If successful, the attacker can mediate most or all communication to and from the victim. Furthermore, by supplying biased neighbor information during normal overlay maintenance, a modest number of malicious nodes can eclipse a large number of correct victim nodes. This paper studies the impact of Eclipse attacks on structured overlays and shows the limitations of known defenses. We then present the design, implementation, and evaluation of a new defense, in which nodes anonymously audit each other’s connectivity. The key observation is that a node that mounts an Eclipse attack must have a higher than average node degree. We show that enforcing a node degree limit by auditing is an effective defense against Eclipse attacks. Furthermore, unlike most existing defenses, our defense leaves flexibility in the selection of neighboring nodes, thus permitting important overlay optimizations like proximity neighbor selection (PNS).
Read paper
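The defense reduces to a degree bound plus an audit, sketched below with an illustrative limit; the paper derives the actual safe bounds and makes the audit anonymous so the audited node cannot tailor its answer.

```python
# Degree auditing: refuse to peer with nodes whose neighbor set exceeds the
# bound, since a node mounting an Eclipse attack must maintain far more
# links than an honest node.
DEGREE_BOUND = 16   # illustrative; the paper analyzes what bound is safe

def audit(reported_neighbors: set) -> bool:
    return len(reported_neighbors) <= DEGREE_BOUND

honest = {f"node{i}" for i in range(12)}
attacker = {f"victim{i}" for i in range(200)}
assert audit(honest) and not audit(attacker)
```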
Kademlia: A Peer-to-Peer Information System Based on the XOR Metric
We describe a peer-to-peer distributed hash table with provable consistency and performance in a fault-prone environment. Our system routes queries and locates nodes using a novel XOR-based metric topology that simplifies the algorithm and facilitates our proof. The topology has the property that every message exchanged conveys or reinforces useful contact information. The system exploits this information to send parallel, asynchronous query messages that tolerate node failures without imposing timeout delays on users.
Read paper
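The metric itself fits in a line, and its properties are what make the proof work: XOR distance is symmetric, and it is unidirectional, so for any target exactly one node ID sits at each distance and lookups from different starting points converge along the same path. A sketch of distance, bucket placement, and lookup ordering:

```python
# Kademlia's XOR metric and the k-bucket index it induces.
def distance(a: int, b: int) -> int:
    return a ^ b

def bucket_index(self_id: int, other_id: int) -> int:
    # Bucket i holds nodes whose distance has its highest set bit at i,
    # i.e. distance in [2**i, 2**(i+1)).
    return distance(self_id, other_id).bit_length() - 1

def closest(nodes, target: int, k: int = 3):
    # Lookups repeatedly query the k nodes closest to the target.
    return sorted(nodes, key=lambda n: distance(n, target))[:k]

nodes = [0b1010, 0b1100, 0b0011, 0b0110]
assert closest(nodes, target=0b1000) == [0b1010, 0b1100, 0b0011]
```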
ZHT: A light-weight reliable persistent dynamic scalable zero-hop distributed hash table
This paper presents ZHT, a zero-hop distributed hash table, which has been tuned for the requirements of high-end computing systems. ZHT aims to be a building block for future distributed systems, such as parallel and distributed file systems, distributed job management systems, and parallel programming systems. The goals of ZHT are delivering high availability, good fault tolerance, high throughput, and low latencies at extreme scales of millions of nodes. ZHT has some important properties: it is light-weight, allows nodes to join and leave dynamically, is fault tolerant through replication, persistent, and scalable, and it supports unconventional operations such as append (providing lock-free concurrent key/value modifications) in addition to insert/lookup/remove.
Read paper
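"Zero-hop" means routing is a local computation: with full membership known to every node, a key's owner is derived directly from its hash, and replication takes the next r nodes on the ring, so no multi-hop lookup is needed. A sketch of that placement rule:

```python
# Zero-hop placement: primary owner = hash(key) mod n, replicas follow on the ring.
import hashlib

def owners(key: str, nodes: list, replicas: int = 2):
    h = int.from_bytes(hashlib.sha256(key.encode()).digest()[:8], "big")
    primary = h % len(nodes)
    return [nodes[(primary + i) % len(nodes)] for i in range(replicas + 1)]

nodes = [f"node-{i}" for i in range(64)]
print(owners("/jobs/42/status", nodes))   # primary plus two replica nodes, no lookup hops
```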
MHT: A Light-weight Scalable Zero-hop MPI Enabled Distributed Key-Value Store
In this paper, we propose and implement a key-value store that supports MPI while allowing applications to access it at any time without having to be declared in the same MPI communication world. This feature may significantly simplify application design and allows programmers to leverage the power of a key-value store in an intuitive way. In preliminary experiments on a supercomputer at Los Alamos National Laboratory, our prototype shows linear scalability at up to 256 nodes.
Read paper
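A sketch of the routing idea with mpi4py (hypothetical key and tag; the paper's distinctive feature, access from outside the MPI world, is not reproduced here): each rank owns the shard its keys hash to, so a put is a single message to the owning rank.

```python
# Run with: mpiexec -n 4 python kv.py
from mpi4py import MPI
import hashlib

comm = MPI.COMM_WORLD
rank, size = comm.Get_rank(), comm.Get_size()
store = {}                                   # this rank's shard of the key space

def owner(key: str) -> int:
    # Deterministic across ranks (Python's built-in hash() is not).
    return int.from_bytes(hashlib.sha256(key.encode()).digest()[:4], "big") % size

key, dest = "job42", owner("job42")
if rank == 0:                                # rank 0 acts as the client
    req = comm.isend(("put", key, "done"), dest=dest, tag=7)
if rank == dest:                             # owning rank stores the value
    op, k, v = comm.recv(source=0, tag=7)
    store[k] = v
if rank == 0:
    req.wait()
```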
Optimizing Load Balancing and Data-Locality with Data-aware Scheduling
Load balancing techniques (e.g. work stealing) are important to obtain the best performance for distributed task scheduling systems that have multiple schedulers making scheduling decisions. In work stealing, tasks are randomly migrated from heavily loaded schedulers to idle ones. However, for data-intensive applications where tasks are dependent and task execution involves processing a large amount of data, migrating tasks blindly yields poor data-locality and incurs significant data-transferring overhead. This work improves work stealing by using both dedicated and shared queues. Tasks are organized in queues based on task data size and location. We implement our technique in MATRIX, a distributed task scheduler for many-task computing. We leverage a distributed key-value store to organize and scale the task metadata, task dependencies, and data-locality information. We evaluate the improved work stealing technique with both applications and micro-benchmarks structured as directed acyclic graphs. Results show that the proposed data-aware work stealing technique performs well.
Read paper
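The policy in miniature, with an illustrative size threshold: small tasks enter a shared queue that idle schedulers may steal from, while data-heavy tasks stay in the owner's dedicated queue so they execute where their data lives.

```python
# Data-aware work stealing: two queues per scheduler, only one is stealable.
from collections import deque

LARGE = 100 * 2**20          # illustrative threshold: 100 MiB of input data

class Scheduler:
    def __init__(self):
        self.dedicated = deque()   # data-bound tasks: never stolen
        self.shared = deque()      # cheap-to-move tasks: stealable

    def submit(self, task, data_size):
        (self.dedicated if data_size >= LARGE else self.shared).append(task)

    def steal_from(self, victim):
        # Stealing touches only the victim's shared queue, so heavy tasks
        # keep their data locality.
        if victim.shared:
            self.shared.append(victim.shared.popleft())

busy, idle = Scheduler(), Scheduler()
busy.submit("reduce-shard-7", data_size=2**30)   # stays with its data
busy.submit("ping-task", data_size=1024)         # may migrate
idle.steal_from(busy)
assert not busy.shared and idle.shared
```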
FusionFS: Toward supporting data-intensive scientific applications on extreme-scale high-performance computing systems
The state-of-the-art, yet decades-old, architecture of high-performance computing systems keeps its compute and storage resources separated. It is thus limited for modern data-intensive scientific applications, because every I/O needs to be transferred via the network between the compute and storage resources. In this paper we propose an architecture that has a distributed storage layer local to the compute nodes. This layer is responsible for most of the I/O operations and saves extreme amounts of data movement between compute and storage resources. We have designed and implemented a system prototype of this architecture, which we call the FusionFS distributed file system, to support metadata-intensive and write-intensive operations, both of which are critical to the I/O performance of scientific applications. FusionFS has been deployed and evaluated on up to 16K compute nodes of an IBM Blue Gene/P supercomputer, showing more than an order of magnitude performance improvement over other popular file systems such as GPFS, PVFS, and HDFS.
Read paper
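The placement idea, reduced to a sketch (plain dictionaries stand in for the node-local store and the distributed metadata service): file data lands on the writer's own node, and only the path-to-node mapping is global, so file I/O stays off the shared network path.

```python
# Write-local placement with a global metadata index.
local_store = {}       # this compute node's local disk
metadata = {}          # stand-in for the distributed metadata hash table

def write(path: str, data: bytes, node_id: str):
    local_store[path] = data          # data stays local to the writing node
    metadata[path] = node_id          # global index records only the location

def locate(path: str) -> str:
    return metadata[path]             # one metadata lookup, then a direct read

write("/run/out.dat", b"\x00" * 4096, node_id="cn-1729")
assert locate("/run/out.dat") == "cn-1729"
```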