Source - Osservatorio Nessuno

Recent content in Blogs on Osservatorio Nessuno - Associazione per la promozione e la tutela dei diritti digitali

Italy's intelligence oversight committee (COPASIR) report on Graphite spyware raises more questions than it answers
The long-awaited report by the Italian Parliamentary Committee for the Security of the Republic (COPASIR) on the use of Graphite spyware is now public. Yet rather than clarifying events, the document raises further concerns about government accountability, the cost of surveillance, and the extent of state overreach. Below we highlight some of the most troubling findings. The full report is available on the website of the Italian Parliament (in Italian).

COPASIR Graphite report

TARGETING A HUMANITARIAN NGO FOR FIVE YEARS

The report confirms that Luca Casarini, a prominent figure in the humanitarian NGO Mediterranea Saving Humans, was intermittently placed under surveillance for at least five years. Initially, traditional methods were used; under the current government, however, the highly invasive Graphite spyware was deployed. Mediterranea is no secretive organization: its search and rescue missions in the Mediterranean are conducted under the watch of international observers and journalists. This raises serious questions about what could justify such a prolonged and intrusive operation targeting an organization whose goals and methods are transparent and lawful. The mismatch between the invasive tools employed, the years-long duration of the operation, and the meager investigative results is alarming — and difficult to justify.

AN INCONSISTENT GOVERNMENT NARRATIVE

The Italian government has made conflicting and shifting statements regarding the spyware scandal. At first, it downplayed the existence of any victims; then, it acknowledged that contracts with the Israeli spyware vendor Paragon were still in effect. Later, it claimed the contracts had been suspended. According to Haaretz, Prime Minister Giorgia Meloni even contacted Netanyahu directly to seek clarification. The COPASIR report, however, offers a different account: it states that the contract was unilaterally terminated by the Italian intelligence agencies, contradicting the government's earlier version of events.

IS COPASIR TRUSTING GRAPHICAL INTERFACES?

The report devotes considerable attention to the "traceability" of actions carried out using Graphite spyware:

> Another legal issue related to the use of modern interception technologies, as emerged during hearings, is that data are saved in non-deletable databases by the public agencies using the system, unless the service provider is involved in the deletion process. (p. 23, translation from Italian)

> According to the information gathered by the Committee during its inquiry, every use of the Graphite spyware is logged: the operator must authenticate with a username and password, and each operation is recorded in a database or "acquisition log" and in an audit register. The database is located on the client's premises, collects information about the target, and is not accessible to the company. The audit log, which is also hosted on servers located at the client, records all operations and system accesses, including technical access for maintenance or updates by the company. Furthermore, data in the database can be deleted by the client, but audit logs cannot be altered by the client. During site visits to the agencies on May 7, 2025, the Committee specifically requested and obtained assurances that the contents of the operations would not be accessible to Paragon Solutions. (p. 13, translation from Italian)

But such assurances are fragile. The absence of a graphical interface to modify audit logs does not guarantee their immutability.
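For context, genuine tamper evidence would require cryptographic commitments that the operator cannot rewrite: typically each log entry is chained to the previous one by a hash, and the latest digest is regularly anchored with a party outside the operator's control. The sketch below is a toy illustration of that idea, assuming the sha2 crate; it is in no way a description of Graphite's actual design, and its last comment is the whole point.

    // Toy tamper-evidence sketch, NOT Graphite's audit register.
    // Each entry commits to the entire history before it via a hash chain.
    use sha2::{Digest, Sha256};

    fn append(prev_digest: &[u8], entry: &str) -> Vec<u8> {
        let mut hasher = Sha256::new();
        hasher.update(prev_digest);      // bind this entry to everything before it
        hasher.update(entry.as_bytes());
        hasher.finalize().to_vec()
    }

    fn main() {
        let mut head = vec![0u8; 32]; // genesis digest
        for entry in [
            "2025-01-10 operator=alice action=target_added",
            "2025-01-11 operator=alice action=data_export",
        ] {
            head = append(&head, entry);
        }
        let checkpoint: String = head.iter().map(|b| format!("{:02x}", b)).collect();
        println!("checkpoint: {checkpoint}");
        // Unless this checkpoint is regularly published somewhere the operator
        // cannot rewrite, the whole chain can simply be recomputed after
        // tampering, and the log proves nothing to an outside observer.
    }

An audit register hosted entirely on the client's own servers, as described in the report, offers no such guarantee.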
If the audit system is under the client's control, anyone with sufficient access could tamper with it—especially given that several months passed between the initial media exposure (January 31) and the start of any formal investigation. Only an independent forensic analysis could credibly verify the integrity of these logs.

One striking example is the case of journalist Francesco Cancellato, officially deemed not to have been targeted, according to COPASIR. The explanation offered is that his device appeared in the detection results due to "false positives." Yet his phone was not the only one affected, as multiple members of his newsroom also showed signs of compromise. Is it reasonable to expect private citizens or NGOs to provide definitive "proof" in cases where the very nature of spyware is to remain invisible, leave no trace, and render verification nearly impossible? This dynamic shifts the burden of proof onto the victim, while conveniently sparing those with full operational control and privileged access from further scrutiny.

The naïveté of such claims would be ironic, if not outright offensive, given the double standards they reveal. Authorities would never settle for a graphical confirmation in lieu of a forensic audit if it were a citizen or activist under investigation. Yet here, the possibility of tampering isn't even considered. Victims and overburdened NGOs (often assisting hundreds of cases worldwide) are expected to conduct even deeper forensic investigations, despite the abundance of circumstantial and corroborating evidence: multiple infected journalists, a Meta notification, and partial independent confirmation by Citizen Lab.

AN EXTORTION-BASED BUSINESS MODEL

One passage from the report is particularly alarming:

> Phone numbers with an Italian prefix are not among those contractually excluded from being subjected to surveillance through Graphite spyware. (p. 18, translation from Italian)

In plain terms: to prevent Italian citizens from being spied on by Paragon's foreign clients, the Italian state must pay. The only way to exempt a country code from the list of potential surveillance targets is to negotiate and pay for a contractual exclusion. This reveals a business model based on extortion: if a government doesn't pay, its citizens remain vulnerable to being targeted by foreign governments using Paragon's tools.

UPDATE – JUNE 10

In the meantime, Paragon has publicly responded to the COPASIR investigation, claiming it had offered technical cooperation to Italian authorities to verify whether its spyware had been used against Francesco Cancellato, and that it terminated its contract with Italy after the authorities declined to proceed. These statements should be treated with caution, as Paragon is a directly involved party with a clear interest in protecting its commercial image. However, on one point we agree: the investigation into the Fanpage journalists remains inadequate. And once again, we see conflicting versions from Paragon and the Italian government, raising more questions than they answer.

CONCLUSION

Once again, advanced surveillance tools are being used against activists and journalists, not just in the exceptional cases that are often cited to justify their deployment. As previous cases in Poland, Spain, Greece, and elsewhere have shown, spyware is often used in democratic countries to target dissenting or inconvenient voices—with enormous financial costs, dubious investigative outcomes, and serious risks to global cybersecurity.
The Italian government must take responsibility, provide full transparency on the actual use of Graphite, and immediately end all contracts with companies like Paragon. The European Union should act—as urged by Citizen Lab—to restrict the use of such technologies, ban their production within Europe, and sanction the companies involved.
Patela: A basement full of amnesic servers
The Osservatorio has finally activated its first experimental nodes on a diskless infrastructure, physically hosted in our own space and designed for low power consumption. In this post, we explain the reasons that led us to work on this project and the implementation we developed. We are also releasing the source code and documentation for the software we wrote, which we will continue to improve and which we hope will be useful for replicating our setup.

MOTIVATIONS AND THREAT MODEL

As we've often pointed out, running Tor nodes involves certain risks and fairly common issues—often due to the fact that, in the event of investigations or other problems, the authorities are not always competent enough to understand how Tor works, or, even if they are, they may still choose to act indiscriminately despite knowing they're not targeting the actual party under investigation. There are many precedents: in Austria, in Germany, in the United States, in Russia, and likely many others. Some of our members have personally experienced that this can also be the case in Italy.

In order to protect ourselves, our infrastructure, and the anonymity and privacy of our users, we must therefore consider a range of possible scenarios, including:

* Attempts to disrupt our operations
* Seizure of servers and data analysis
* Physical compromise of the servers
* Remote compromise of the servers

In the first case, the main obstacle is the constant stream of abuse reports and complaints about the activity of our network. In order to ensure our operations, as we've mentioned in previous posts, we now manage our own network with IP addresses we own, routed directly to our physical location—minimizing, as much as technically possible (for now!), the number of intermediaries with access or control over our infrastructure.

Since the beginning of the project, we've followed two main intuitions: the first is that by approaching classic problems in less conventional ways, we might find solid solutions and also pave the way for other organizations and projects (like buying a physical space!). The second is that, to keep interest alive, both among our members and externally, our activities must be fun, engaging, and innovative.

NETWORK INFRASTRUCTURE

As already mentioned, we operate a router at a datacenter in Milan that announces our AS and IP space. It is currently connected to our basement via an XGS-PON 10G/2.5G link, but our goal is to expand our presence geographically and, where possible, reuse the same network resources (we'll share more updates on this soon).

Network infrastructure diagram.

One of the key elements of this architecture is that we only trust machines to which we have exclusive access. This means that even our main router at the MIX in Milan is considered potentially untrustworthy. For this reason, our main focus is on our datacenter in Turin.

DISKLESS INFRA AND SYSTEM TRANSPARENCY

System Transparency is a project originally funded by Mullvad for their VPN infrastructure, and now actively developed and maintained by Glasklar Teknik. The goal of the project is to develop a set of tools to run and certify transparent systems. The idea is as follows: first, system images must be reproducible—that is, ISO files or similar images should be bit-for-bit rebuildable by anyone from source code, in order to prove the absence of tampering (or backdoors). Second, everything needed to reproduce these images must be public and well-documented, as should the images themselves.
Finally, at least two additional properties must be ensured: first, only authorized system images should be allowed to run; second, there must be a public, immutable list of all authorized images (a so-called transparency log). To summarize, in sequence, the concept of system transparency requires:

* Building the system in a reproducible way
* Distributing all materials and instructions needed for reproduction
* Signing system images with securely stored keys
* Submitting all signatures to a transparency log

Among the various tools provided by the System Transparency project, stboot does most of the heavy lifting. The server boots from a minimal local image which, thanks to stboot, downloads and verifies the next stage: a reproducible and signed system image intended to run Tor (or, in the future, other services). Because of the signature verification, we can safely host these images remotely or in the cloud without major security concerns.

stboot boot screenshot.

Since the diskless system is still experimental—and although our goal is to make the entire infrastructure easily replaceable—we still run a few services and devices in a more traditional setup:

* Maintenance VPN server on a separate network, used for remote administrator access.
* DHCP server: only internal IPs are assigned from a private network range and used for debugging and maintenance.
* DNS server: best practices from the Tor Project recommend running a local DNS resolver to reduce the risk of traffic analysis via domain resolution. In a Tor connection, exit nodes are responsible for resolving domains. For this reason, we've set up our own local recursive DNS server.
* HTTP server: used to serve the operating system image to the servers.
* Managed switch: we've been gifted a beautiful managed switch! Many thanks to those who chose to support us in this way!

PATELA

The problem with diskless software is that it doesn't remember anything—not even the few configurations or keys needed to function correctly. In our case, there are just a few essential configurations for a Tor relay:

* Tor's configuration file, /etc/torrc
* The relay's identity keys, /lib/tor/keys/*
* Various network settings (IP and NAT)

These configurations also need to be generated on the machine and updated later. For example, we need to be able to update the list of nodes that make up our family. Clearly, maintaining a separate OS image for each machine would be both impractical and hard to maintain. While tools like ansible-relayor exist, we chose a different approach based on practical and security considerations. We prefer each node to generate its own keys and manage its own configuration, and for the configuration to be applied in pull mode rather than push. In other words, the node itself periodically checks for configuration updates, and only the node should have control over its keys. This is an important security distinction: a compromised configuration server should not be able to affect the security of the node or its keys.

And that's how patela was born. Patela is a minimal software tool that downloads and uploads configuration files to a server. The server communicates network configurations (primarily assigning available IPs and the gateway via an API), and the client reads and applies them. All other files that would normally need to persist between reboots are encrypted locally using the TPM and then uploaded in encrypted form to the configuration server.
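To make the pull model concrete, here is a minimal sketch of what the client side of such a flow can look like. The endpoint path and field names are assumptions for illustration, not patela's actual API, and the mTLS and Biscuit authentication described later in this post are omitted; it only shows the shape of "ask the server for your network assignment, apply it locally".

    // Illustrative pull-mode client, not patela's real code.
    // Assumes the reqwest (blocking + json features) and serde crates.
    use serde::Deserialize;

    #[derive(Deserialize)]
    struct NetworkConfig {
        address: String, // e.g. "198.51.100.10/24" (hypothetical field name)
        gateway: String, // e.g. "198.51.100.1"     (hypothetical field name)
    }

    fn fetch_config(base_url: &str) -> Result<NetworkConfig, Box<dyn std::error::Error>> {
        // In patela this request is authenticated with mTLS and a Biscuit token.
        let cfg = reqwest::blocking::get(format!("{base_url}/api/network"))?
            .error_for_status()?
            .json::<NetworkConfig>()?;
        Ok(cfg)
    }

    fn main() -> Result<(), Box<dyn std::error::Error>> {
        let cfg = fetch_config("https://config.example.org")?;
        // Applying the address, NAT rules and torrc, then starting tor,
        // is the node's job; the server only hands out the assignment.
        println!("assigned {} via {}", cfg.address, cfg.gateway);
        Ok(())
    }

The important property is the direction of control: the server can only offer data, and the node decides what to apply.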
This way, the configuration server never has access to the machine's keys and cannot directly compromise it—except possibly through Denial of Service, such as distributing invalid IPs or corrupted backups. The system offers the following advantages:

* Resistance to certain physical attacks, as encryption keys are stored in the TPM
* Anti-forensic by default, since no data is stored on disks or persistent media
* Nodes can be reset by reinitializing the TPM
* In the future, we could perform remote attestation to certify that the running system images match those we published
* With remote attestation, TPM unsealing could happen only if the system image is genuine, ensuring a strong chain of trust

Logical diagram of patela.

RUST AND TOOLS

Since Tor is being rewritten in Rust, and will soon offer everything needed to replace the classic C-Tor implementation even for relays, we decided to align with this and adopt the same language for better future compatibility. During development, it became clear that a flexible toolchain was essential, especially given the need to use C libraries and support multiple platforms. While patela's architecture isn't conceptually very complex, making a piece of software like this maintainable over time, with so many components, is no easy task. Fortunately, the Rust ecosystem and its tooling have matured significantly in recent years, now offering almost everything we need. In practice, we aim to build both a client and a server, and we need:

* A database, for the server
* A library to download and upload resources, for the client
* External libraries (TPM, mTLS, crypto, etc.)
* Compatibility with multiple 64-bit architectures, for the client (arm64, x86_64)
* For the client, the ability to cross-compile with the target system's libraries

Containers are often used to meet requirements like ecosystem consistency. However, our goal is to build a simple, lightweight architecture that works on low-power machines and can modify global network configurations. There are also several things we explicitly want to avoid, some due to personal preference, others based on our chosen constraints:

* Splitting the project into many components, libraries, or subprojects, which leads to extra maintenance overhead
* Client and server, while performing different roles, mostly rely on the same libraries and data structures
* The compiled binary is the only required file: no packages or archives. And with System Transparency, dynamic linking wouldn't help either—we would still need to rebuild the system image for every update
* Writing a Makefile to cover all these goals would likely be longer and more complex than writing the project itself

And here are the projects that helped us achieve all of this with just a few lines of code and relatively little effort:

* cargo: the package manager that just works
* clap: a solid foundation for building command-line tools
* sqlx: a small, simple, and elegant SQL library
* actix web: a well-known and mature web framework
* constgen: lets us embed constants at compile-time, avoiding external archive management
* cargo zigbuild: integrates the Zig compiler with cargo, making cross-compilation straightforward
* rustls: pure joy compared to communicating directly with OpenSSL
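For orientation, a manifest wiring these crates together can look roughly like the snippet below. Versions, features and the exact crate names (in particular the const-generation crate) are illustrative guesses, not patela's real Cargo.toml, and should be checked against crates.io.

    # Illustrative dependency section only; not taken from the patela repository.
    [dependencies]
    clap = { version = "4", features = ["derive"] }        # CLI for client and server
    actix-web = "4"                                         # server-side HTTP API
    sqlx = { version = "0.7", features = ["runtime-tokio", "sqlite"] }
    rustls = "0.23"                                         # mTLS without OpenSSL
    serde = { version = "1", features = ["derive"] }

    [build-dependencies]
    const-gen = "1"   # crate name assumed for the "constgen" compile-time constants mentioned above

cargo-zigbuild, being a cargo subcommand rather than a dependency, is installed separately (for example with `cargo install cargo-zigbuild`).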
COMPILATION AND CONFIGURATION

Once the patela binary is compiled—with all required assets included—it's enough to copy it into the system image before signing and distributing it. As mentioned earlier, we use constgen, which makes it easy to generate compile-time constants, for example embedding the client's SSL certificates and the Tor configuration template:

    // Build-script excerpt: needs std::{env, fs} and the const_declaration! macro in scope.
    let server_ca_file =
        env::var("PATELA_CA_CERT").unwrap_or(String::from("../certs/ca-cert.pem"));
    let client_key_cert_file = env::var("PATELA_CLIENT_CERT").unwrap();
    let server_ca = fs::read(server_ca_file).unwrap();
    let client = fs::read(client_key_cert_file).unwrap();

    let const_declarations = [
        const_declaration!(pub SERVER_CA = server_ca),
        const_declaration!(pub CLIENT_KEY_CERT = client),
    ]
    .join("\n");

Although cargo supports cross-compilation, when using external C dependencies like tpm2-tss, we must ensure that the libc used during compilation is compatible with the one present in our system images. As mentioned earlier, the Zig compiler—integrated directly with cargo via cargo-zigbuild—allows us to specify the target libc version, along with the architecture and kernel:

    $ cargo zigbuild --target x86_64-unknown-linux-gnu.2.36

NETWORK

We've previously discussed how to run multiple Tor servers on the same network interface with different IPs using a high-level firewall like Shorewall. Now we've applied the same rules using nftables, the native Linux interface for writing network rules. Two rules are required: one to mark packets by the owning user ID, and another to configure source NAT for that mark (the SNAT rule has to live in a nat-type postrouting chain):

    $ nft 'add rule ip filter OUTPUT skuid <user id> counter meta mark set <mark id>'
    $ nft 'add rule ip nat POSTROUTING oifname <interface> mark and 0xff == <mark id> counter snat to <source ip>'

The rules are therefore applied directly by patela, using the nftl library.

BISCUITS, MTLS AND TPM

We use Mutual TLS (mTLS) for communication. Unlike traditional TLS, where only the client verifies the server, mTLS involves mutual authentication. This offers a double benefit and elegantly solves multiple problems: on the one hand, TLS natively provides transport security and certificate revocation mechanisms; on the other, client certificate authentication allows us to uniquely identify each node or machine. We can then save and track metadata such as the assigned IP, name, and other related info in the server's database. The main drawback of mTLS is managing certificates and their renewal.

Everyone gets one bold choice per project — otherwise, where's the fun? Ours was Biscuit, a token-based authentication system similar to JWT. The only reason we use a session token is to avoid authenticating every single API endpoint.

On the client side, the magic lies in the TPM. Libraries and examples are often sparse, and working with TPMs can be frustrating. In our case, we need a key to survive across reboots, because it's the only persistent element on the server. We use a Trust On First Use (TOFU) approach: on first boot, the client generates a primary key inside the TPM and uses it to encrypt a secondary AES-GCM key, which is used to encrypt configuration backups. The AES-GCM key is then stored on the server, encrypted with the TPM. This means only the physical node can decrypt its backup. To revoke a compromised node, we simply delete its encrypted key from the server. The logic for detecting whether a TPM has been initialized runs entirely on the client and is implemented in patela.
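A compressed sketch of that key hierarchy is shown below. The AES-GCM layer follows the aes-gcm crate's documented API; the TPM sealing step is reduced to a placeholder function, since in the real code it goes through the TPM stack and is what actually binds the backup key to the physical machine.

    // Sketch of the TOFU backup-key hierarchy described above; not patela's real code.
    // Assumes the aes-gcm crate (0.10). `tpm_seal` is a stand-in for the TPM operation.
    use aes_gcm::{
        aead::{Aead, AeadCore, KeyInit, OsRng},
        Aes256Gcm,
    };

    fn tpm_seal(key_material: &[u8]) -> Vec<u8> {
        // Placeholder: in the real flow this wraps the key with the TPM primary key,
        // so only this physical node can ever unwrap it again.
        key_material.to_vec()
    }

    fn main() {
        // First boot: generate the backup key and wrap it for storage on the server.
        let backup_key = Aes256Gcm::generate_key(OsRng);
        let wrapped_key = tpm_seal(backup_key.as_slice());

        // Every backup: encrypt the persistent state (torrc, relay identity keys) locally.
        let cipher = Aes256Gcm::new(&backup_key);
        let nonce = Aes256Gcm::generate_nonce(&mut OsRng); // must be unique per backup
        let ciphertext = cipher
            .encrypt(&nonce, b"contents of /etc/torrc and identity keys".as_ref())
            .expect("encryption failed");

        // Only the wrapped key, the nonce and the ciphertext ever leave the machine;
        // the configuration server stores them but cannot decrypt anything.
        let _ = (wrapped_key, nonce, ciphertext);
    }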
In the future, directly integrating relay long-term keys with arti could be a better and more efficient solution, especially if combined with a robust measured boot process to control unsealing.

FUTURE WORK

This post summarizes what we consider just the first phase of our project. We know there's still a lot to do and improve, and we'd like to share our current wishlist:

* Open source network stack: both in our Milan exchange point and in our datacenter we use routers running proprietary software (Mikrotik CCR2004-1G-12S+2XS and Zyxel XGS1250-12). We'd love to convert both to open source software on open hardware.
* Open source Cantina-OS: for now we've only published the patela code, but we'll soon release our OS image configurations too.
* Better certificate management: right now, we recompile patela for each physical node to embed the correct TLS certificate. We'd like to switch to a shared client certificate and move authentication to the TPM via remote attestation. This would simplify update workflows and enable attestation even for virtual machines.

IN PRACTICE

We've launched four exit nodes, running with patela and System Transparency:

* murazzano
* montebore
* robiola
* seirass

All four are running on a single Protectli with coreboot, pushing over 1 Gbps of effective total bandwidth.

The Protectli machine in our basement.
5x1000 Donation Campaign: Fiscal Year 2024
How to insert Fiscal Code inside 5x1000

2024 was a year full of good intentions — and, incredibly, also of many concrete results! Some we had envisioned long ago, others emerged along the way. From the beginning, our goal was to experiment in order to increase the number of Tor exit nodes in Italy. To be honest, we haven't deployed many yet, but we've laid solid foundations for the future:

* we became an Autonomous System (AS214094)
* we acquired IP addresses
* and we can now deploy fiber across Italy

In short, we're very close to becoming a full-fledged network operator. But Tor has never been our only goal. We've never set boundaries for ourselves, other than doing what we believe is right and what we genuinely enjoy. Over the past year, we have:

* contributed to investigative journalism
* conducted independent research on surveillance in Italy

In the coming months, our infrastructure dedicated to Tor will be ready, based on a fully open source and reproducible architecture. We're working on several research projects that we plan to publish soon, and we can't wait to get out of our basement and share our adventures.

YOU CAN SUPPORT US TOO

Right now, there's an easy way to support us at no cost to you. When filing your taxes in Italy, simply list our tax code as your 5x1000 beneficiary: 97871010019

Thank you for being with us — and for believing in what we do.
A deep dive into Cellebrite: Android support as of February 2025
In the previous blog post, we summarized part of the Cellebrite product history and gathered some insights into the market for surveillance software and equipment aimed at mobile forensics. In this blog post, we will explore the current unlocking capabilities, as per their February 2025 documentation distributed to customers. We'll also introduce some concepts and terminology, and hint at basic mitigations, though a proper follow-up will come in subsequent posts. For a very detailed and insightful write-up to help understand Android encryption, read "Android Data Encryption in depth" by Quarkslab or watch their RECON23 talk.

GLOSSARY

To understand the following content, it is useful to recap some of the terminology generally used in the field:

* Cold: Powered-off device, with content likely to be encrypted at rest.
* Warm/Hot: Powered-on device, likely in AFU state or unlocked.
* File Based Encryption (FBE): A method of encrypting individual files rather than the entire disk, allowing different encryption keys for different files based on user authentication and device state.
* Full Disk Encryption (FDE): A method of encrypting the entire storage of a device, requiring the PIN or passphrase to decrypt upon boot. Used mostly before Android 7, deprecated from Android 13 onwards.
* After First Unlock (AFU): A powered-on device that has been unlocked at least once after boot.
* Before First Unlock (BFU): A powered-off device, or a powered-on device that has not been unlocked since boot.
* Trusted Execution Environment (TEE): A secure, isolated environment within a processor that handles cryptographic operations and protects sensitive data from the main operating system. If no secure element is present, it is the main security context.
* Secure Element (SE/eSE/iSE): A hardware security chip, supported from Android 9 onwards, that implements cryptographic operations and key storage in a dedicated hardware component, such as the Titan chip. It is complementary to the TEE and not a requirement to run Android or FBE.
* Keystore: A system service in Android that provides a secure way to store and manage cryptographic keys. It ensures keys are not accessible to user-space applications and can only be used in secure operations, such as encryption, decryption, and signing. Keystore supports hardware-backed security when available.
* Keymaster: A lower-level component that works with Keystore, handling cryptographic operations within a Trusted Execution Environment (TEE) or secure element module (e.g., Titan M/Titan M2).
* Secure Startup: A feature on Samsung Knox devices using FDE that encrypts the main key with the user PIN or password. If not enabled, user credentials are not actually required to decrypt the data.
* Credential Encrypted (CE) Storage: A type of encrypted storage that is only accessible after the user authenticates, ensuring sensitive data is protected until the device is unlocked.
* Device Encrypted (DE) Storage: A form of encryption that protects data even before the user logs in, but is accessible after boot without requiring user authentication. It is generally used for system-critical files.
* Full File System Extraction (FFS): A technique used to obtain a complete copy of a device's file system, after operating system decryption. Analysis could also allow the retrieval of deleted files.
* Brute Force (BF): Brute forcing PINs and passwords varies a lot depending on software and hardware implementations. The worst case is the extraction of encrypted keys and offline bruteforcing. Other cases include bypassing throttling on the TEE or secure elements for online bruteforcing.
* Security Patch Level (SPL): Indicates the date or version of the latest security updates applied to a device.
For more information, read the GrapheneOS forum post discussing July 2024 Cellebrite capabilities, and the description of their encryption systems from Samsung Knox.

All modern Android devices have a TEE. Some devices, for instance Google Pixels, can have an extra secure element, like the Titan M. Thanks to Google's hardening efforts, exploitation is incredibly difficult and vulnerabilities are very costly, but not impossible.

THE FEBRUARY 2025 SUPPORT MATRIX

Cellebrite, as far as we know, publishes a support matrix for Android-based and iOS-based devices monthly, or at least multiple times a year. The latest version available at the time of writing is version 7.73.1, released in February 2025. Below is a description of the automated process. Clearly, if the phone is unlocked or if the PIN or password is already known, the process is straightforward.

High-level description of the automated process for obtaining file system dumps.

Unlocking non-flagship devices is usually easier for several reasons:

* Many manufacturers fail to promptly release security updates, or in some cases, never release them at all.
* Different processors have varying security stacks and hardening measures. A secure element adds significant protections compared to a device with only a TEE, but it is not required and often not present.
* It is well known that some MTK (MediaTek) processors are vulnerable to bootrom exploits, so the whole trust chain, including the TEE, can be compromised by anyone. Bootrom exploits are not patchable.
* If a manufacturer has not invested in securing their devices, they may disable common security mitigations or introduce privileged software that weakens overall security.

Support matrix, high-level per chipset and manufacturer.

As seen in the slides, there is a reason why almost every non-Pixel, non-Samsung device is considered unlockable, with only a few exceptions. Even using LineageOS, which typically ensures at least some level of security updates, does not make a significant difference if the underlying platform and binary blobs are not secure, which they almost never are.

Support matrix, distinction between cold and hot per manufacturer.

Brute-force can take different forms. A root exploit could allow active bruteforcing, involving methods to bypass throttling or significantly speed up the process in various ways. A TEE exploit (or, for instance, a secure boot bypass as shown on MediaTek chips by Quarkslab) could allow for offline bruteforcing, making any numerical PIN shorter than 10 digits useless. This serves as a good reminder that a 6-digit PIN or pattern will always be cracked. Increasing the length and complexity, ideally by using a password instead, provides a meaningful layer of mitigation.
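To make the order of magnitude concrete, here is some back-of-the-envelope arithmetic. The guess rate is an arbitrary illustrative assumption (a million guesses per second, plausible only once key material can be attacked offline); the point is the gap between a numeric PIN and a real password, not the exact figures.

    // Rough keyspace arithmetic, not a benchmark; the guess rate is an assumption.
    fn main() {
        let guesses_per_second: f64 = 1_000_000.0;

        let candidates = [
            ("6-digit PIN", 10f64.powi(6)),
            ("10-digit PIN", 10f64.powi(10)),
            ("12-char password (~70-symbol alphabet)", 70f64.powi(12)),
        ];

        for (label, keyspace) in candidates {
            let seconds = keyspace / guesses_per_second;
            println!("{label}: ~{keyspace:.1e} combinations, ~{seconds:.1e} s to exhaust");
        }
    }

Under that assumption a 6-digit PIN falls in about a second and a 10-digit PIN within hours, while a 12-character password remains out of reach by many orders of magnitude, which is why the advice in this post keeps coming back to passwords.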
Support matrix, MediaTek- and Exynos-based devices.

As anticipated, if you have a MediaTek-based device, while it is still worth using a long password, there is practically no mitigation possible, except ditching the device.

Support matrix, older Pixel devices up to the 5/5a.

Pixel devices remain quite a solid choice if kept updated. While working exploits appear to exist to perform FFS extraction in the AFU state on the stock Google ROM, GrapheneOS's additional hardening and protections are effective, and have been since 2022. Brute force is generally not available, because the user credentials are stored in the secure element and thus cannot be extracted, while active attempts are heavily throttled.

Support matrix, newer Pixel devices up to the 9.

As you could have guessed, the most solid choice for an Android device is a newer Pixel, running GrapheneOS and with a strong password. Biometric authentication works differently and is generally not the main target of these attacks, meaning that, unless your threat model includes being physically coerced into unlocking your device, it is safe to use. That said, using a strong password is a minor daily inconvenience: you will only be prompted for it after a power cycle. Applications that offer separate passwords and encryption, such as Signal or many password managers, should have this enabled to maximize protection in case everything else fails.

CONCLUSIONS

If you think you could be in a situation where your device could be seized, it is always wise to turn it off first. You should also change your PIN, if you use one, as soon as possible in favor of a strong password. The same applies if your device currently has no PIN or if you are using a pattern. If you think there is a high chance of you eventually being targeted, you should move to a Pixel device running GrapheneOS, even just a secondhand older one (from the 6a onwards).

As we stated in the first blog post, this market is unregulated and prone to countless abuses, as widely reported by Amnesty against civil society, but possibly also in instances that are harder to detect, such as surveillance connected to gender violence. While we do not currently have direct evidence of these occurrences, anybody with a license could be performing these services, and we know licenses have been sold in double-digit numbers in Italy alone.
A deep dive into Cellebrite: How it came to be
The widespread use of Cellebrite software is not a secret: we mentioned it very recently, as did Amnesty in their recent report covering abuses against civil society and detailing the vulnerabilities exploited. This blogpost is the first in a series of two: here we'll explore the general history and development of the company, while in the second one we will provide a more detailed list of its capabilities, updated as of February 2025. Cellebrite is not alone in this market; its competitors, especially regarding exploitation, include Grayshift's GrayKey, recently acquired by Axon and integrated into Magnet Forensics.

A BRIEF HISTORY

Our recent interest in Cellebrite arises from two key developments. First, we have seen a surge of activists asking us to examine phones that were returned after being seized. Second, sources familiar with the matter have confirmed that, following the recent Paragon scandal in Italy, Cellebrite has attempted to reassess its customer vetting practices. However, particularly in Italy, there have been longstanding reports of erratic sales policies and practices.

To understand this, let's take a step back: companies like Cellebrite operate in a largely unregulated market. While their primary customers are law enforcement agencies, their target customer base extends to a wide range of private companies and individuals. This raises significant concerns, as some of these customers may be operating outside the law—yet, in most cases, there is no oversight to hold them accountable.

The line of products sold by the company has undergone multiple rounds of rebranding and structural changes over the past four years, particularly since Signal published their research about it. Cellebrite has been in the market for a long time, initially focusing on dedicated hardware for efficient forensic extraction from mobile devices, even before smartphones existed. If you buy any of their older devices, you might be surprised to find that even end-of-life products, such as the relatively recent UFED Touch 2, still include cables for extracting data from Nokia 1100 phones. While forensic analysis has always been central to their business model, the need to unlock devices and bypass cryptography is a relatively recent development. Outdated versions of Cellebrite hardware are sold all over the internet.

While, as far as we know, Cellebrite used to ship known exploits or unlocking methods for older models, the introduction of mandatory file-based encryption (FBE) and secure elements has shifted the approach to mobile security. Rumor has it that, while there was already demand in the market and some companies were involved, many were primarily repurposing jailbreaks, known chipset vulnerabilities, and similar techniques to achieve their goals. However, checkm8 changed the game for the entire industry—it made it possible to unlock all iPhones from the 4S up to the X, regardless of the software patch level. With these tools freely available and easily accessible, competitors—and even law enforcement agencies themselves—could perform the unlocking and extraction without relying on specialized vendors. As the market became more competitive, companies realized they needed to step up their game to retain and expand their customer base. This led to a significant increase in research efforts focused on acquiring and developing 0-day and 1-day vulnerabilities.
While the exploits required for these types of attacks are often very different from those used in spyware infections, since they rely on hardware access rather than message-based, browsing-based, or similar entry points, they are not necessarily simpler, especially for Apple devices. For Android, being based on Linux, things seem relatively more accessible due to a larger attack surface. This includes a wide range of USB-based drivers that have undergone little to no scrutiny over the years and are still shipped by default, often remaining loaded even when a phone is locked. Recent reports from Amnesty have highlighted these vulnerabilities. However, advanced mitigations, such as memory tagging and other new security features on Pixel phones, are making exploitation increasingly difficult.

DISTRIBUTING EXPLOITS IS SUCH A DIFFICULT JOB

And here we arrive at the second major issue in this market: previously, Cellebrite kept its most valuable vulnerabilities private, requiring customers to physically ship the devices they wanted unlocked to certain locations. Additionally, the exploitation process was largely manual: human analysts would identify devices and attempt to determine the most suitable unlocking and extraction method if it was not already covered by the suite provided to customers.

This approach proved problematic for several reasons. First, some customers were unwilling to send their devices across borders, as it could compromise the chain of custody. Second, in most democracies, defendants have the right to be present during non-reproducible forensic analyses that might alter evidence, something inherent to active exploitation attacks. Lastly, scalability was a major limitation, both due to the human effort required and the delays caused by shipping. From a business standpoint, an automated solution would have benefited both Cellebrite's operations and its customers.

But this presents a fundamental challenge: if Cellebrite simply shipped its exploitation capabilities with their software suite, even with extensive obfuscation, skilled researchers could quickly reverse-engineer their vulnerabilities and expose them. This could happen either out of a commitment to security and democratic principles or, ironically, to collect bug bounties. As far as we know, Cellebrite has long used custom-made cables and, more recently, has attempted to enhance its capabilities with single multi-purpose adapters required for deploying their attacks. These hardware dependencies also make it more difficult for cracked versions of their suite to proliferate—something we have witnessed firsthand to be far from uncommon. However, as long as the core logic had to run on the customer's computer—or even on a dedicated tablet, as marketed in previous generations—it remained relatively easy to extract and analyze.

Here comes the idea: since tablets and similar devices were highly impractical and had limited resources, while current Inseyets software might require up to 128GB of RAM and 24-core Threadripper CPUs to run smoothly on large pieces of evidence, and probably to support their "magic AI" (even generative AI, which, of course, would never hallucinate, especially in cases where people's futures are at stake), they opted for a hybrid approach. Instead of moving the entire suite to a dedicated appliance, Cellebrite decided to keep the main software running on the customer's hardware.
However, for selected customers permitted to purchase the Premium package (or whatever it is currently called—whether Premium Advanced Access, Advantage, Advanced Logical, or perhaps Premium ES, also known as Mobile Elite), Cellebrite now ships a specialized embedded device: the Turbo Link.

Screenshot from the Turbo Link marketing brochure.

As we will see in the next post, the new software suite, along with the Turbo Link adapter, fully streamlines exploitation and extraction. The exploits likely never transit in cleartext on the customer's machine and are only delivered when necessary, after the target device has been identified. From the limited pictures available, the Turbo Link appears to be actively cooled and includes an HDMI port, which requires a secondary cable from the kit to connect to the target device. A reasonable assumption about this setup is that exploits are likely end-to-end encrypted from Cellebrite's servers to the Turbo Link device, which itself probably has to guarantee a certain level of both software and hardware integrity. According to Amnesty, the HDMI port is there to simplify the emulation of media peripherals, which are often used for exploitation. On the other side, the Turbo Link is connected via USB to the customer's computer.

However, for customers who wish to scale beyond their contracts—or for those needing to unlock devices while offline—Cellebrite also sells licenses for a centralized network server called the Enterprise Vault Server (EVS). According to the brochure and their promotional videos, it "supplies the resources necessary to gain access to advanced iOS and Android devices."

CONCLUSIONS

We have explored some of Cellebrite's history and business structure. In the next post, we'll take a more practical look at how this translates into their current capabilities. In the coming months, we will publish mitigation strategies and guides to help you and your communities defend against these types of threats.

We will say it again: we strongly believe that vulnerability trading is intrinsically unethical, as it fuels a vicious cycle of insecurity that puts everyone at risk. If you have information that could complement this piece, feel free to contact us. If you own old Cellebrite hardware that is end-of-life, or that no longer has a valid license and is thus no longer useful, we are collecting older surveillance technology for archival purposes (more on this soon) and will happily take your e-waste. Visit the contacts page to discover how to get in touch with us securely.
Cellebrite and the routine use of digital surveillance in Italy
In recent years, authorities in several countries have intensified their use of digital surveillance tools to access mobile devices, often without proper adherence to legal procedures and without the informed consent of those affected—sometimes in blatant violation of existing laws, as demonstrated by the current Paragon and Graphite case. Osservatorio Nessuno recently assisted members of the No CPR Torino assembly (a collective opposing Italy's Centri di Permanenza per il Rimpatrio, detention centers for migrants awaiting deportation) in discovering that their phones had been unlocked and forensically analyzed using Cellebrite tools, without being given sufficient prior notice to allow their legal consultants to verify that the procedure was conducted lawfully.

Archive photo of an extraction in progress using a Cellebrite Touch 2.

EVIDENCE OF CELLEBRITE USE

A specific case concerns three phones that were seized on March 20, 2024, during an action at Malpensa Airport. The smartphones, protected by PINs and with encryption enabled, were returned with clear signs of compromise: two of them had their PINs written on a sticker on the back, an evident indication that they had been unlocked and analyzed. As far as we know, the devices were turned off and relatively up to date, meaning they had to be unlocked in what Cellebrite's own terminology classifies as the Before First Unlock (BFU) state, one of the most technically complex and expensive capabilities to develop and acquire.

Using the Mobile Verification Toolkit, we confirmed the compromise and found unequivocal signs of the use of tools from the Israeli company. The extracted files matched those analyzed in a recent Amnesty International report, which denounces the use of Cellebrite and spyware to monitor journalists and activists in Serbia. The connection to Cellebrite and its UFED/Inseyets service suggests that Italian law enforcement sent the devices to third-party consultants who, equipped with these tools, extracted data from the phones without adequately informing the individuals concerned or their legal representatives. We are currently in contact with international organizations and are assisting in an in-depth technical analysis. The results of this analysis will be published at a later date.
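For readers who want to run a similar check on a seized-and-returned Android device, the Mobile Verification Toolkit is free software and can be used along these lines. Command names and options reflect recent MVT releases and are given only as a starting point: check mvt-android --help and the MVT documentation for the exact syntax of your version, and ideally work on a copy of the data with someone experienced in forensics.

    $ pip install mvt
    $ mvt-android check-backup --output ./mvt-results ./backup.ab
    $ mvt-android check-androidqf --output ./mvt-results ./androidqf_acquisition/

MVT compares the acquired data against known indicators of compromise; an empty result is not proof of absence.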
CELLEBRITE AND THE 0-DAY MARKET: A THREAT TO EVERYONE

It is not just the pervasiveness of surveillance that is concerning, but also the fact that Cellebrite is active in the 0-day exploit market, which takes advantage of unknown vulnerabilities in operating systems to bypass security measures. This type of trade undermines the security of all users, including governments and businesses, fueling a vicious cycle of cybersecurity threats.

Osservatorio Nessuno strongly condemns the use of tools like Cellebrite against activists, journalists, and ordinary citizens. These tools, marketed as crime-fighting solutions, are instead often used to target political dissent and civil society. Furthermore, the companies that develop and sell them—particularly in the case of Cellebrite—are not required to follow any meaningful vetting process for their customers or users. As a result, they are often freely available for purchase by third parties, including phone shops, consultants, and private companies.

As an organization, we are committed to defending the right to privacy and digital security for everyone, especially those most vulnerable to state surveillance. We will continue to monitor these developments and provide support to anyone who has been a victim of abusive surveillance practices. If you are an activist or journalist concerned about the security of your devices, or if your device has been seized and you do not have a technical consultant to assist you, contact us via Signal at this link or send an email to support@osservatorionessuno.org.
Updating Exit Policy and Contact Info for our (exit) relays
CONTACT INFO CHANGES

We have updated the ContactInfo field in the torrc configuration of all our relays to align with the proposed ContactInfo Information Sharing Specification. This standard defines a structured format for describing key attributes of a relay family operator. Ensuring operators are reachable and that relays are associated with trusted individuals or organizations is crucial for the health of the Tor Network. To enhance transparency, we have also published proof of relay ownership in our public .
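As an illustration, a torrc ContactInfo line following that specification has roughly this shape. The field names come from the ContactInfo Information Sharing Specification, while the values below are placeholders rather than our actual entry:

    ContactInfo email:ops[]example.org url:https://example.org proof:uri-rsa ciissversion:2

With proof:uri-rsa, the specification expects the relay fingerprints to be published at a well-known path under the stated URL, which is what makes the claimed operator independently verifiable.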