In February 2016, Alphabet announced that Google Ideas, its internal research unit focused on technology and conflict, would be renamed Jigsaw and repositioned as a technology incubator. As TechCrunch reported at the time, Eric Schmidt described the new name as reflecting a world of physical and digital challenges and a focus on collaborative problem-solving.

The rename marked a shift in how the group framed its purpose, from policy research and convening toward applied engineering.

Jigsaw's approach page describes the group as having spent more than a decade researching and developing technologies connected to digital security, conflict, and free speech. The work concentrates on parts of the internet that fail under adversarial conditions: websites knocked offline by coordinated traffic attacks, access blocked by national censorship infrastructure, online comment sections overwhelmed by abusive submissions, and search results exploited to direct users toward extremist content.

These are not marginal problems. They affect the reliability of election information, the safety of journalists operating in politically sensitive environments, and the practical ability of civic organizations to publish and communicate.

Jigsaw began inside Google as Google Ideas, led by Jared Cohen, before adopting its current name in 2016. The group sits within Alphabet's organizational structure and draws on Google's technical infrastructure, but its stated focus is on public-interest problems rather than commercial products.

The repositioning under the Alphabet umbrella coincided with a push toward building tools that third parties could deploy independently, rather than producing research alone.

Four projects illustrate that approach in practice: Project Shield, Outline, Perspective API, and the Redirect Method. Each addresses a separate failure point in how information is accessed, published, moderated, or sought.

Taken together, they reflect a consistent organizational focus on infrastructure rather than consumer-facing products, and on problems that conventional product teams at technology companies have limited incentive to prioritize.

Key Findings


  • Google Ideas was renamed Jigsaw in 2016 and repositioned as a technology incubator within Alphabet, shifting emphasis from policy research to applied tool-building.
  • Project Shield provides free DDoS protection to eligible news publishers, election monitors, and human rights groups by filtering attack traffic through Google's network infrastructure.
  • Outline helps individuals and organizations deploy self-hosted VPN servers, reducing exposure to censorship without relying on centralized providers that are easier to block. The software is open source and has been independently audited by two security firms.
  • Perspective API assigns probabilistic toxicity scores to submitted comments so human moderators can prioritize review rather than making automated removal decisions.
  • A 2018 RAND Corporation evaluation found that Redirect Method campaigns targeting extremist search terms produced click-through rates comparable to commercial advertising benchmarks, though RAND noted that click behavior does not establish attitude change.
  • Jigsaw publishes project documentation, open-source code, and independent audit records for parts of its work, allowing outside review of specific technical claims.

Infrastructure Under Pressure


A distributed denial-of-service attack, commonly abbreviated as DDoS, works by directing large volumes of automated traffic at a target server until the server can no longer respond to legitimate requests. The technique does not require exploiting a software vulnerability in the target system. It requires only the ability to generate or coordinate enough traffic to exceed the target's capacity.

For small organizations with limited server infrastructure, even a modestly scaled attack can take a website offline within minutes and keep it there for hours.
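The capacity arithmetic is simple to sketch. All figures below are hypothetical, chosen only to show how quickly a coordinated botnet outruns a small site's capacity:

```python
# Illustrative capacity arithmetic for a volumetric DDoS attack.
# Every figure here is hypothetical.

server_capacity_rps = 500   # requests/second the site can serve
legit_traffic_rps = 50      # normal visitor load

botnet_nodes = 10_000       # compromised machines in the botnet
rps_per_node = 10           # a modest request rate per node

attack_rps = botnet_nodes * rps_per_node       # 100,000 req/s
total_rps = attack_rps + legit_traffic_rps

overload_factor = total_rps / server_capacity_rps
print(f"Offered load is {overload_factor:.0f}x server capacity")

# Even if the server picked requests at random, the share of its
# capacity left for legitimate visitors would be negligible:
legit_share = legit_traffic_rps / total_rps
print(f"Legitimate share of traffic: {legit_share:.2%}")
```

No single node needs to be powerful; the attack works because the aggregate, not any individual source, exceeds the target's capacity.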

Project Shield responds by placing a protected site's traffic behind Google's own network infrastructure. Incoming requests pass through Google's systems, where traffic consistent with attack patterns is filtered before reaching the site's servers.

Google's News Initiative describes the service as free for eligible organizations, a category that includes news publishers, election monitors, and human rights groups. No changes are required on the organization's own servers; enrollment consists of routing the site's traffic through Shield's infrastructure.
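Shield's actual filtering logic is proprietary. As a rough illustration of one building block any filtering layer in front of an origin server might use, here is a minimal fixed-window, per-source rate limiter, with all thresholds invented for the example:

```python
# Minimal sketch of per-source rate limiting at a filtering layer.
# Shield's real filtering is proprietary and far more sophisticated;
# this only illustrates dropping traffic that exceeds a per-source
# budget before it reaches the origin server.
from collections import defaultdict

class RateLimiter:
    def __init__(self, max_requests: int, window_seconds: int):
        self.max_requests = max_requests
        self.window_seconds = window_seconds
        self.counts = defaultdict(int)  # (client_ip, window) -> count

    def allow(self, client_ip: str, now: float) -> bool:
        window = int(now // self.window_seconds)
        self.counts[(client_ip, window)] += 1
        return self.counts[(client_ip, window)] <= self.max_requests

limiter = RateLimiter(max_requests=100, window_seconds=10)

# A single source sending 1,000 requests inside one window: only the
# first 100 pass; the rest are dropped before reaching the origin.
passed = sum(limiter.allow("203.0.113.7", now=5.0) for _ in range(1000))
print(passed)  # 100
```

Real mitigation must also handle attacks distributed across thousands of sources, which is why filtering at the scale of Google's network, rather than at the protected site, matters.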

In 2018, a DDoS attack took a Tennessee county election website offline during active voting. The incident is the kind of event that defines Shield's intended scope: an organization carrying a public accountability function, operating with limited technical capacity, and targeted at the moment when its availability mattered most.

Commercial DDoS mitigation services exist and are effective at scale, but enterprise-grade protection carries costs that most civic and journalistic organizations cannot sustain on an ongoing basis.

News organizations, election infrastructure, and human rights groups share a vulnerability profile that follows from their public role. They attract adversarial attention proportional to that role while operating with technical and financial resources that are generally limited relative to commercial organizations of comparable public visibility.

Project Shield's eligibility criteria are structured around that profile, concentrating the service on organizations that face the highest probability of targeting and have the fewest commercial alternatives.


Outline and the Decentralization of Circumvention


Outline addresses a related but structurally different problem. Network-level censorship, in which a government or network operator blocks access to websites or services by filtering IP addresses, domain names, or categories of traffic, does not attack a site's servers. It prevents users in a given region from reaching content that remains fully available elsewhere.

The effect for the user is the same: the content is inaccessible. But the mechanism and the appropriate technical response are different.

Virtual private networks address this by routing a user's connection through a server located outside the blocked network, allowing access to otherwise unreachable content. The limitation of standard centralized VPN services is that provider server addresses are identifiable and can themselves be blocked at scale.

Outline, released in 2018 and described on its FAQ, takes a different approach by helping individuals and organizations deploy their own private VPN servers rather than connecting to a shared provider. A newsroom or advocacy group can configure an Outline server without requiring specialized technical knowledge, and the server's address remains known only to those the operator chooses to share it with.
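Outline distributes server details to invited users as access keys following the Shadowsocks URI scheme. As a simplified sketch (real keys may carry additional parameters), a key can be constructed and parsed like this; the server address and credentials are placeholders:

```python
# Sketch of a Shadowsocks-style access key of the kind Outline
# distributes. Format simplified for illustration; the host, port,
# and password below are placeholders. The key encodes everything a
# client needs, and it is shared only with people the server
# operator chooses.
import base64
from urllib.parse import urlparse

def parse_access_key(key: str) -> dict:
    url = urlparse(key)
    assert url.scheme == "ss", "not an access key"
    # The userinfo portion is base64("method:password").
    padded = url.username + "=" * (-len(url.username) % 4)
    method, password = (
        base64.urlsafe_b64decode(padded).decode().split(":", 1)
    )
    return {"host": url.hostname, "port": url.port,
            "method": method, "password": password}

secret = base64.urlsafe_b64encode(
    b"chacha20-ietf-poly1305:s3cret"
).decode().rstrip("=")
key = f"ss://{secret}@192.0.2.10:8388"
print(parse_access_key(key))
```

Because the key points at a server whose address appears on no public provider list, a censor has nothing to enumerate and block in advance.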

Because each deployment is independent, blocking one server has no effect on others running the same software.

Outline is open source, meaning its code is publicly available for inspection. The software has been independently audited by security firms Radically Open Security and Cure53, with published reports from 2018, 2022, and 2024.

The distributed design carries operational implications that differ from a managed service. An organization running its own Outline server takes on responsibility for that server's availability and maintenance, obligations a centralized provider would otherwise handle.

Jigsaw designed the tool to reduce the technical barrier to deployment, not to eliminate the management responsibility entirely. The architecture reflects the underlying goal: resilience against network-level blocking, at the cost of additional operational overhead on the part of the deploying organization.

The combination of open-source code and independent security audits that Jigsaw has applied to Outline reflects a particular approach to establishing technical trust. Open-source publication and third-party review allow outside parties to examine how the software behaves, not only how it is described. Published audit records provide a documented history of that review.

Different tools in Jigsaw's portfolio apply different levels of public documentation, and the degree of outside scrutiny available to researchers and journalists varies accordingly.

Automated Scoring and the Limits of Training Data


Content moderation at scale presents a different category of problem than access or availability. A major online platform or news publication may receive millions of user-submitted comments per day, and the volume of potentially harmful content within that total exceeds what review teams can examine individually.

The practical consequence is that harmful content either accumulates unreviewed, or platforms apply automated filtering rules that lack the contextual sensitivity of case-by-case human judgment.

Perspective API is a machine-learning tool designed to assist human moderators by scoring submitted text for probable toxicity. The system returns a probability score for each piece of text, representing the likelihood that a human reviewer would perceive it as harmful.

Paul Friedl, writing in Law, Innovation and Technology in 2023, describes Perspective API as a machine-learning content moderation system in which toxicity was defined as content likely to make people leave a discussion.

The design positions Perspective as a triage tool rather than a decision-making system. Publishers and platform operators set their own score thresholds and apply their own editorial standards to content the system flags.

Perspective does not determine whether a comment should be removed; it identifies which comments are statistically more likely to require attention, allowing moderation teams to direct their review capacity toward the highest-risk submissions first.
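That triage workflow can be sketched in a few lines. The comments and scores below are invented stand-ins for what a scoring model would return, and the threshold is an editorial choice made by the deploying publisher, not part of the API:

```python
# Triage sketch: surface the highest-scoring comments for human
# review first. Scores here are hypothetical stand-ins for model
# output; nothing is removed automatically.
REVIEW_THRESHOLD = 0.8  # publisher-chosen, not fixed by the system

scored_comments = [
    ("Thanks for the article!",           0.03),
    ("You people are all idiots.",        0.91),
    ("I disagree with the second point.", 0.12),
    ("Get out of this thread, loser.",    0.86),
]

# Queue for human review, highest probable toxicity first.
review_queue = sorted(
    (c for c in scored_comments if c[1] >= REVIEW_THRESHOLD),
    key=lambda c: c[1],
    reverse=True,
)
for text, score in review_queue:
    print(f"{score:.2f}  {text}")
```

The design decision lives in the threshold: a lower value sends more borderline content to reviewers, a higher value trades coverage for reviewer time.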

Friedl situates Perspective within a broader category of algorithmic normative systems, drawing a comparison to how legal rules also encode social judgments into repeatable procedures. Both types of systems face questions about whose judgments they encode and how they handle cases that fall outside the distribution their designers anticipated.

For content moderation specifically, the challenge is that assessments of what constitutes harmful speech vary across communities, languages, and legal contexts. A model trained on one population's judgments may not generalize reliably to others.

Any machine-learning model trained on human-labeled data reflects the judgments of its labelers, including whatever inconsistencies those judgments contain. For a content moderation system deployed at meaningful scale, the gap between what the model flags and what a given community considers harmful can be substantial.

Friedl's analysis identifies this as a central challenge in evaluating systems like Perspective: the model's behavior cannot be assessed independently of how its training data was collected, labeled, and weighted.

Intervening Before Content Is Encountered


The Redirect Method operates at a different point in the information process. Rather than reviewing content after it has been submitted, it intervenes in user search behavior before content is encountered. When a user searches for terms associated with violent extremist content, the method places advertisements or links in the results that direct the user toward counter-narrative material.

The extremist content itself is not removed; an alternative is made available alongside it within the same search interface.
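The placement logic can be sketched abstractly. The trigger terms and destination URLs below are placeholders; the actual campaign keyword lists are not public:

```python
# Abstract sketch of query-triggered counter-narrative placement.
# Trigger terms and URLs are placeholders, not real campaign data.
from typing import Optional

COUNTER_NARRATIVE_PLACEMENTS = {
    "placeholder_term_a": "https://example.org/testimony-video-1",
    "placeholder_term_b": "https://example.org/documentary-playlist",
}

def placement_for_query(query: str) -> Optional[str]:
    """Return a counter-narrative link if the query matches a
    trigger term, else None. The original search results are left
    untouched; the placement appears alongside them."""
    for term, url in COUNTER_NARRATIVE_PLACEMENTS.items():
        if term in query.lower():
            return url
    return None

print(placement_for_query("how to find placeholder_term_a content"))
print(placement_for_query("weather tomorrow"))  # None
```

The mechanism is ordinary keyword-targeted advertising; what changes is only the destination the matched user is offered.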

A 2018 evaluation by the RAND Corporation examined Redirect Method campaigns targeting searches related to violent jihadist and violent far-right content. RAND found click-through rates of approximately 3.19 percent for jihadist-related searches and 2.22 percent for far-right searches, performance that RAND found comparable to commercial search advertising benchmarks.

The evaluation covered behavior at the point of search; RAND noted that click-through rates measure a discrete user action and do not establish whether that action was followed by sustained engagement with counter-narrative content or any subsequent change in attitude.
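Click-through rate itself is a simple ratio of clicks to impressions. The raw counts below are hypothetical, chosen only so the resulting rates match the figures RAND reported:

```python
# Click-through rate is clicks divided by impressions. The raw
# counts are hypothetical; only the resulting rates match those
# RAND reported for the two campaigns.
def click_through_rate(clicks: int, impressions: int) -> float:
    return clicks / impressions

jihadist_ctr = click_through_rate(clicks=3_190, impressions=100_000)
far_right_ctr = click_through_rate(clicks=2_220, impressions=100_000)

print(f"{jihadist_ctr:.2%}")   # 3.19%
print(f"{far_right_ctr:.2%}")  # 2.22%

# A click is a discrete action; these figures say nothing about what
# the user did after clicking.
```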

The advertising infrastructure underlying the Redirect Method is the same targeting infrastructure used in commercial digital advertising, which matches placements to users based on their active search queries. In commercial applications, this mechanism directs users toward products or services. In the Redirect application, it directs users who have searched for extremist content toward alternative material.

For a method intended to address online radicalization, that distinction is significant. It illustrates how preliminary the field's evaluation methods remain relative to the scale of the problems practitioners are attempting to address.

Publication, Audits, and What Outside Review Can Establish


Jigsaw presents itself as a research and development group that publishes its tools and findings for external use. Its main site lists active projects and background materials, and products such as Outline make their audit records directly available.

Those practices allow independent researchers and journalists to examine specific technical claims, though they do not provide visibility into how the tools are configured and evaluated across individual deployments by third-party organizations.

The four projects described here address separate failure points in how information moves and how access is maintained under adversarial conditions. The available evidence on each shows measurable performance at the technical layer: traffic filtering, server deployment, comment scoring, and search advertising placement.

What happens beyond that layer, including how protected access is used, how moderation affects community dynamics, and how exposure to counter-narrative content affects radicalization outcomes, remains a set of open questions that the tools themselves do not resolve.

The gap between technical performance and social outcome is not specific to Jigsaw, but it sets the terms for how the organization's work should be assessed as these tools continue to develop and scale.
