ANALYSIS

The OSINT Ethics Spectrum: When Does a Tool Become a Weapon?

The tools are the same. The targets are different. And the ethics conversation in most OSINT communities has not kept pace with either the capability or the abuse.

This article is not neutral on the subject. There are uses of OSINT that are clearly ethical. There are uses that are clearly questionable. And there is a wide contested grey zone in between — one that serious investigators, psychologists, journalists, and lawyers have developed frameworks to navigate. This article maps all three zones, tool by tool, feature by feature, and gives you the frameworks to classify cases the article does not explicitly cover.

Most people approach OSINT ethics the same way they approach traffic laws: if you have not broken one, you are fine. That framing is dangerously incomplete.

The majority of problematic OSINT activity is entirely legal. Pulling a private individual's home address from a people-search site is legal in most jurisdictions. Cross-referencing their face across social media platforms is legal. Running an automated script that monitors their follower count, their bio changes, when they post, and what media they upload — all legal. Combining all of this into a profile and handing it to someone who has a restraining order against them — still not a crime until something happens.

The law is not the ethics floor. It is a different instrument measuring a different thing. Professional fields that handle sensitive information about individuals — psychology, investigative journalism, licensed private investigation — have each developed ethics codes that operate well above the legal minimum. None of them were written specifically for OSINT practitioners. All of them apply directly.

The Three Frameworks

Framework 1: Psychology — The APA Standard

The American Psychological Association's Ethics Code is built on five general principles: Beneficence and Nonmaleficence (actively benefit those affected and take care to do no harm), Fidelity and Responsibility (maintain relationships of trust), Integrity, Justice (fair and equal treatment), and Respect for People's Rights and Dignity, which specifically includes the rights to privacy, confidentiality, and self-determination.

Standard 4.04 — Minimizing Intrusions on Privacy

The most transferable principle to OSINT: informed consent is a constraint, not a technicality. When you are building a profile on a private individual, they have not consented to your analysis. The absence of consent shapes every methodological choice that follows.

Framework 2: Private Investigation — Legal Access Is Not Ethical Use

Licensed private investigators across Europe and the US operate under frameworks that explicitly separate lawful access from ethical use. The core obligation: a PI has a duty to their client, but also an independent duty not to use lawful methods in ways that cause harm beyond what the investigation's legitimate purpose requires.

A PI can legally access a court record showing someone's address. Using that access to locate a domestic abuse survivor for her abusive former partner is a legal act and an ethics violation, and in many jurisdictions criminal facilitation. The ethics standard requires investigators to verify the purpose of a request, not only the legality of the method. Two PI ethics principles transfer directly:

  • Proportionality — investigation depth must match the legitimate interest. Investigating a CEO for documented fraud justifies significant research. Investigating a private individual because a client is curious about their whereabouts justifies nothing.
  • Purpose verification — before any investigation involving a private individual, establish and document who the client is, what the stated reason is, and what the plausible lawful basis is. If these cannot be answered, the investigation should not begin.

Framework 3: OSINT Journalism — The Bellingcat and Berkeley Protocol Standards

Three documents represent the most OSINT-specific ethics guidance available: Bellingcat's published ethics framework, the Berkeley Protocol on Digital Open Source Investigations, and the Stanley Center's Gray Spectrum report. They share four core principles:

  • Source transparency — every finding must be traceable to its origin and that origin must be shared so methodology can be independently verified.
  • Proportionality to public interest — the depth of investigation must be justified by the scale of public interest. Documenting war crimes justifies extensive investigation of participants. Documenting a private individual's daily schedule justifies nothing.
  • Do not amplify harm — sharing information that puts a private individual at heightened physical or reputational risk, disproportionate to any public interest served, is itself an ethics violation. Bellingcat has explicitly withheld full geolocation data when publishing it would endanger the person who filmed evidence.
  • Humility about tools — no single tool's output is conclusive. Facial recognition produces false positives. Satellite imagery requires specialist interpretation. A tool result is a data point, not a conclusion.

What Is Questionable — and Why

The following uses fall outside all three frameworks. They may be entirely legal. They may be carried out with a tool whose other features are entirely legitimate. That does not make them appropriate. Where lawful authority — law enforcement acting under a warrant, a licensed investigator with documented legal mandate — shifts the calculus, that is noted. For everyone else, these are the areas where serious ethical questions arise and where the burden of justification is highest.

Real-time location tracking of private individuals

Determining where a private person is physically located in real time, without their knowledge or consent, is surveillance — not intelligence. Tools that aggregate geo-tags from recent social media posts to estimate current location, that extract GPS coordinates from image metadata, or that monitor check-ins to build a movement timeline are being used for this purpose routinely. The APA standard fails on both minimisation and consent. The PI standard fails on proportionality. The Bellingcat standard fails on public interest. Unless you are law enforcement with a warrant or a licensed professional with a documented lawful mandate, real-time location tracking of a private individual belongs in this category.

GHunt draws location intelligence directly from Google accounts — not from stated location fields, but from Maps reviews (which reveal the restaurants, hospitals, and neighbourhoods a person frequents), Photos metadata with embedded GPS coordinates, and Calendar events when left at public defaults. A subject's Maps review history alone can reconstruct months of movement. Shodan, designed for infrastructure research, can locate home routers, personal NAS devices, and home security cameras when pointed at a known residential IP range rather than an organisation's network. The tool is passive — the ethics shift comes entirely from the purpose and the target.
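The GPS extraction mentioned above is worth seeing concretely. EXIF stores each coordinate as three degrees/minutes/seconds rationals plus a hemisphere reference, and converting that to a plottable decimal coordinate takes a few lines. This is a minimal stdlib-only sketch of the conversion; the `dms_to_decimal` helper is illustrative, not drawn from any named tool:

```python
# Hypothetical illustration: converting the degrees/minutes/seconds
# rationals stored in EXIF GPS tags into a decimal coordinate.
from fractions import Fraction

def dms_to_decimal(dms, ref):
    """Convert EXIF-style (degrees, minutes, seconds) plus a
    hemisphere reference ('N'/'S'/'E'/'W') to decimal degrees."""
    degrees, minutes, seconds = (Fraction(x).limit_denominator() for x in dms)
    decimal = float(degrees + minutes / 60 + seconds / 3600)
    # Southern and western hemispheres are negative by convention.
    return -decimal if ref in ("S", "W") else decimal

# EXIF stores e.g. 52° 30' 57.6" N as three rationals plus a reference.
lat = dms_to_decimal((52, 30, 57.6), "N")  # → 52.516
```

The point of the sketch is how little stands between a posted photo and a map pin, which is why stripping metadata before publishing images matters for anyone at risk.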

Accessing non-public data

If data requires authentication to access, it is not public. Scraping a private Facebook profile through a fake account, extracting data from a private Telegram group through deceptive membership, or accessing private communications using leaked credentials — none of these are OSINT. They are unauthorised access to private systems, dressed in OSINT vocabulary. The fact that a tool automates this access does not change its nature.

SpiderFoot in active mode sits on this boundary. Where passive mode queries existing public databases — DNS, WHOIS, certificate transparency logs, breach feeds — active mode directly contacts target systems through port scanning, banner grabbing, and web crawling. Those actions leave traces on the target's systems. Active reconnaissance of systems you do not own or have explicit authorisation to test is not protected as open source research, and in most jurisdictions is not protected as research at all.

Active surveillance loops on named private individuals

IG-Detective v2.0.0 includes a feature its own documentation describes as: "Active Surveillance (surveillance): Lock onto a target and run a background SQLite loop. Get live terminal alerts for precise follower changes, new media, and silent bio edits."

Read those words carefully: lock onto a target, silent bio edits. This is persistent, automated, covert monitoring of a specific named person's micro-behaviours on a social platform, running indefinitely without their knowledge. It fails the APA minimisation standard. It fails the PI proportionality standard. It fails the Bellingcat public interest standard. The language is not the language of research — it is the language of targeting. There is a narrow legitimate use case: tracking a public official's coordinated disinformation campaign, with oversight, as part of a documented public interest investigation. For every other use on a private individual, this feature is ethically unjustifiable. The UK's National Stalking Helpline consistently identifies digital monitoring tools as the primary mechanism enabling harassment campaigns. IG-Detective's surveillance mode is, functionally, open-source stalkerware.

Building profiles to enable doxxing or targeting

Maltego transforms that map the personal network of a private individual — family relationships, social connections, geographic data, infrastructure links — produce exactly this kind of document. The graph answers the question its output is designed to answer: who does this person trust, where do they live, and how can they be reached? Running those transforms against a private individual without a documented legitimate purpose fails every framework. Recon-ng's reporting module, which generates formatted intelligence dossiers, raises the same concern: a formatted dossier on a named private individual implies intent. If you are producing one, the purpose it will serve must be documented before the report is generated, not rationalised afterward.

Aggregating identity profiles without documented purpose

Most OSINT tools are genuinely dual-use. The same capability that makes them useful for fraud investigation, security research, or self-auditing makes them dangerous when the same query is run without a legitimate, documentable purpose. The aggregation risk is where this becomes acute.

MOSINT takes a single email address and returns breach exposure, linked social media profiles across multiple platforms, associated phone numbers, domain registrations, and physical address data. OSINT Industries queries 1,500+ sources to produce a timeline of online activity and account linkages going back years. Sherlock searches a username across 300+ platforms simultaneously. Crucially, it can locate pseudonymous accounts that a private individual deliberately created under a different name to keep an identity separate for safety or personal reasons. People build pseudonymous identities to escape harassment, to speak freely in contexts that would endanger them under their real name, and to maintain privacy. Sherlock can undo that separation in seconds.

Each of these tools is capable of producing a comprehensive identity profile from a single starting point. The APA minimisation standard is direct: if you do not need every data category the tool returns to serve the stated purpose, collecting it is a violation of proportionality. Running any of them against a private individual without a specific, documented purpose proportionate to that level of intrusion is not investigation. It is aggregation for its own sake, and the profile it produces can cause real harm regardless of whether that harm was the stated intent.

Investigating minors as private subjects

No professional framework permits the investigation of private individuals who are minors outside of law enforcement with appropriate oversight. The APA standard is explicit: persons with reduced capacity for autonomous decision-making require heightened protection. Minors cannot consent to investigation. They cannot advocate for their own privacy rights in the same capacity as adults. This category is not grey.

What Is Allowed

Investigating public figures in their public capacity

Politicians, executives, public officials, and others who exercise public power have a reduced reasonable expectation of privacy with respect to that power. Investigating a CEO's corporate relationships, a politician's financial disclosures, or a public official's documented conduct in their official role is within scope under all three frameworks. The Berkeley Protocol frames it as proportionality: the power exercised determines the legitimacy of the scrutiny. The limits are real — a public figure's minor children are private individuals; a public figure's medical history is relevant only when it directly affects their public duties.

Maltego is well suited here. Mapping corporate ownership structures, tracing the relationships between a public official and private entities with interests in their decisions, or visualising the organisational network around a documented fraud — these are proportionate uses when the central subject is someone exercising public power. Google Dorks, which use advanced search operators to surface specific indexed content, are the most transparent OSINT method available: every query can be replicated by the subject themselves, and the index reflects only what the subject or their associated entities made publicly available.
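Because dorks are nothing more than composed query strings, the transparency claim above is easy to see: the full query is visible and exactly repeatable by anyone, including the subject. A minimal sketch, using a hypothetical `build_dork` helper (the `site:` and `filetype:` operators are standard search syntax; the helper itself is an illustrative convenience, not a real library):

```python
# Minimal sketch of composing Google dork queries from operator pairs.
# The operators (site:, filetype:) are standard search syntax; the
# build_dork helper is hypothetical, written for illustration only.
def build_dork(terms="", **operators):
    """Compose a search query from free-text terms and advanced operators."""
    parts = [f"{op}:{value}" for op, value in operators.items()]
    if terms:
        parts.append(f'"{terms}"')  # quote the free-text phrase exactly
    return " ".join(parts)

# Scoped to what the subject's own entities made publicly indexable:
query = build_dork("annual report", site="example.com", filetype="pdf")
# site:example.com filetype:pdf "annual report"
```

The output is itself the audit trail: the query string can be logged verbatim and re-run by a reviewer or the subject.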

Defensive security research on infrastructure

Shodan indexes internet-connected devices and exposed services — its primary use is finding misconfigured infrastructure, exposed databases, and vulnerable devices before attackers do. theHarvester collects email addresses, subdomains, and employee names from public sources including search engines and certificate transparency logs, giving security teams a clear picture of their own external exposure. SpiderFoot in passive mode queries existing public databases — DNS records, WHOIS data, certificate transparency logs, breach feeds — without touching the target. Recon-ng's domain and DNS modules provide the same capability with modular control over scope: activate only the modules relevant to the stated question, stop when it is answered.
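To make the certificate transparency step concrete: services such as crt.sh expose CT log entries as JSON, where each record's `name_value` field can hold several newline-separated hostnames. A stdlib-only sketch of distilling that output into a deduplicated hostname set (the JSON shape is assumed from crt.sh's public output and should be verified against the live API):

```python
# Sketch: distilling unique hostnames from a certificate-transparency
# dump in the JSON shape crt.sh returns — a list of records whose
# "name_value" field may contain several newline-separated names.
import json

def unique_hostnames(ct_json: str) -> set:
    records = json.loads(ct_json)
    names = set()
    for record in records:
        for name in record.get("name_value", "").splitlines():
            names.add(name.strip().lstrip("*."))  # normalise wildcard entries
    names.discard("")
    return names

sample = ('[{"name_value": "mail.example.com\\nvpn.example.com"},'
          ' {"name_value": "*.example.com"}]')
# unique_hostnames(sample)
# → {"mail.example.com", "vpn.example.com", "example.com"}
```

This is entirely passive: the CT logs are public records created by certificate authorities, and nothing here touches the target's systems.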

Journalism investigating documented wrongdoing

A journalist investigating documented human rights violations, organised crime, corporate fraud, or public sector corruption has a clear public interest basis for investigative OSINT. The Bellingcat standard applies throughout: transparent methodology, archived evidence, harm assessment before publication, and explicit acknowledgment of uncertainty. Recon-ng's contact enumeration modules are legitimate here when the target is the public figures and entities at the centre of a documented accountability investigation. Maltego's relationship mapping serves the same purpose when tracing the network of connections between entities implicated in wrongdoing.

Auditing your own digital footprint

Running OSINT tools against your own presence is the use case they were designed for. Sherlock maps every platform where your username appears across 300+ sites, including accounts you may have created and forgotten. MOSINT applied to your own email returns breach exposure, linked social profiles, and registration data — a diagnostic rather than an intrusion. GHunt on your own Google account reveals the location and activity data you are passively broadcasting through Maps reviews, Calendar settings, and Photos metadata — much of it data you likely did not know was queryable. OSINT Industries across 1,500+ sources produces a breadth of output that often surfaces exposure no single tool would show. No consent question applies when you are the subject.
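The username-mapping technique Sherlock automates reduces to URL templating plus a per-site existence check. A minimal sketch of the templating half, with a hypothetical three-platform subset (real tools maintain hundreds of templates plus per-site "not found" heuristics, which is the hard part this sketch omits):

```python
# Sketch of the Sherlock-style technique at its core: substituting a
# username into per-platform profile-URL templates. Checking each URL's
# HTTP status — and each site's quirky "not found" markers — is the
# part tools like Sherlock handle for you.
PLATFORM_TEMPLATES = {  # a tiny illustrative subset, not Sherlock's list
    "github": "https://github.com/{username}",
    "reddit": "https://www.reddit.com/user/{username}",
    "mastodon.social": "https://mastodon.social/@{username}",
}

def candidate_profiles(username: str) -> dict:
    """Map platform name to the profile URL to probe for this username."""
    return {site: url.format(username=username)
            for site, url in PLATFORM_TEMPLATES.items()}

urls = candidate_profiles("your-handle-here")
```

Run against your own handles, the resulting URL list is a checklist of accounts to review, reclaim, or delete.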

The Grey Zone — How the Frameworks Classify It

The grey zone is large and most real-world OSINT work operates within it. Context determines ethics, and context is almost always complex. The three frameworks converge on the same practical tests for evaluating grey zone cases.

Private individuals with stated legitimate purpose

A fraud victim wants to identify who defrauded them. A company wants to vet a prospective employee. A journalist wants to verify a source's claimed identity. Each is a real, legitimate purpose that may justify limited OSINT investigation of a private individual. What makes each grey rather than clearly allowed is proportionality and verification: can you document a purpose that would withstand scrutiny? If yes, proceed — with method choices constrained to what is genuinely necessary for that stated purpose. If no, or not yet, it should not proceed.

In fraud investigations, MOSINT and OSINT Industries become grey zone tools rather than clearly questionable ones, but only when the purpose is specific and documented, and only when the output is used in proportion to that purpose. MOSINT returns everything it can find from an email address. OSINT Industries aggregates across 1,500+ sources. Using either in a legitimate fraud case does not justify collecting every data point returned. The investigator decides what the stated purpose actually requires, and stops there. That discipline is what separates grey from questionable in this context.

Cross-platform account linking

Sherlock, MOSINT, and OSINT Industries can all cross-reference individuals across platforms — linking usernames, email addresses, and profile images to confirm that multiple accounts belong to the same person. For a cybersecurity analyst tracking a threat actor's infrastructure, this is essential investigative technique. For a journalist confirming a public figure's use of anonymous accounts to spread disinformation, it is accountability journalism. For someone locating an ex-partner who deleted their main accounts and created new ones to escape contact, it is the same technical action in a context that fails every framework.

Same method, three entirely different ethical positions. The APA minimisation standard is the test: does the cross-platform linking serve the stated purpose, or does it go beyond it? The ethical discipline is the same regardless of which tool is used: document the purpose before the search, confine the output to what that purpose requires, and stop when the stated question is answered.

Geolocation from public posts

Extracting GPS coordinates from social media metadata, or geolocating footage from publicly posted video, is a legitimate investigative technique. Bellingcat has used it to document war crimes. The Berkeley Protocol frames the ethics as a harm assessment: what is the worst-case outcome if this information is wrong or misused? For conflict documentation, the risk of error is methodological. For a private individual's location, the risk of misuse is physical safety.

Relationship network mapping

Maltego and Recon-ng's aggregation modules construct detailed relationship maps — who knows whom, which organisations are connected, which infrastructure shares registrant data. In fraud investigation and accountability journalism, these maps are essential. The same maps, built around private individuals who are not subjects of documented wrongdoing, create something closer to a targeting package.

The grey zone is the investigation that starts with a legitimate subject — a documented fraud entity, a public official — and through relationship mapping surfaces connected private individuals who are adjacent to the investigation but not themselves implicated. The Bellingcat standard is specific: collect only what is necessary to establish the relevant connection. A director's name is relevant. A director's home address, their partner's social media profiles, and their children's school do not become relevant because Maltego surfaced them as connected nodes. The discipline is in knowing where to stop, and stopping.

The Seven Sins of Bad OSINT

Bellingcat's 2024 framework identified seven recurring failures across the open source research community. Each is a methodology failure and an ethics failure — because the harm from bad OSINT is not hypothetical. Innocent people have been publicly misidentified in terror attacks. Sources have been endangered by careless exposure. War crime documentation has been compromised by chain-of-custody failures.

  1. Not providing the original source. Reposting content without linking its origin prevents verification and spreads unverifiable claims. Without the original source, your findings cannot be verified and neither can your investigation.
  2. Confirmation bias. Accepting evidence that confirms a conclusion you already hold, and dismissing what contradicts it. Every investigator is susceptible. Naming it explicitly is the first line of defence.
  3. Failing to archive. Online content disappears. If you do not preserve source material at the time of investigation, you may not be able to prove what you found when it matters most. The Internet Archive and archive.today exist for this reason.
  4. Ignoring context. Misinterpreting common events as significant due to unfamiliarity with the domain. Satellite images of controlled burns, private flight paths, and viral footage all require domain expertise before conclusions are drawn.
  5. Misusing tools. Treating tool output as conclusive. Facial recognition produces false positives — particularly across ethnic groups and at scale. A tool result is a data point requiring corroboration, not a conclusion.
  6. Editing source material. Watermarking, trimming, or overlaying audio on original footage in ways that destroy forensic value. Bellingcat has documented cases where edited source material concealed critical audio evidence. If you did not create the content, do not edit it.
  7. Racing to publish. Speed that produces false conclusions is not an asset. When amateur OSINT accounts rushed to identify perpetrators of the Boston Marathon bombing, the Bondi Junction attack, and the Allen, Texas mall shooting, they publicly named innocent people. In each case, verification was skipped in favour of being first. The harm was real and immediate.
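Failure 3, archiving, is the easiest to fix programmatically: the Wayback Machine triggers a capture when its save endpoint is fetched. A minimal sketch that builds the save request and a timestamped evidence-log entry (the `archive_record` shape is a hypothetical convention, not a standard; actually fetching the save URL requires network access):

```python
# Sketch: archiving at collection time, not after the content vanishes.
# Fetching the Wayback Machine's save URL (e.g. with urllib.request)
# requests a capture; the record dict is a hypothetical evidence-log
# convention for proving what you archived and when.
from datetime import datetime, timezone

def wayback_save_url(target: str) -> str:
    """The Wayback Machine triggers a capture when this URL is fetched."""
    return f"https://web.archive.org/save/{target}"

def archive_record(target: str) -> dict:
    """A minimal evidence-log entry: what you archived and when."""
    return {
        "url": target,
        "save_request": wayback_save_url(target),
        "requested_at": datetime.now(timezone.utc).isoformat(),
    }

entry = archive_record("https://example.com/post/123")
```

Building the log entry at the moment of collection, rather than reconstructing it later, is what preserves chain of custody when the original disappears.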

Where This Leaves the Practitioner

OSINT tools are not inherently dangerous. The same username hunter that verifies a public official's anonymous disinformation accounts can locate someone who changed their username to escape a stalker. The same email aggregator that reveals fraud can expose a domestic abuse survivor's new address. The same relationship mapper that documents a criminal network can become a targeting document for harassment. The ethics live entirely in the purpose, the subject, and the discipline applied to both.

Before any investigation involving a private individual, answer three questions and write the answers down:

Who is the subject, and what is their relationship to public life? Public figures exercising public power are different subjects from private individuals. That distinction determines appropriate scope.

What is the purpose, and can it survive scrutiny? If it cannot be documented clearly enough to defend, it cannot proceed.

Is the method proportionate? The friction test: if the subject knew exactly what you were doing and why, would they, or a reasonable third party, consider it proportionate? Active surveillance of micro-behavioural changes on a private person's social media does not pass. Extracting routine location data from a private person's Maps reviews does not pass. A scoped username search on a documented fraud suspect in a fraud investigation does.

The tools in this article are available to anyone. What is not freely available is the discipline to use them correctly. That discipline is what separates investigation from surveillance — and it is a distinction that matters, legally, professionally, and in terms of the concrete harm that can be caused to the people on the other end of the query.

Related Service

The Mirror (€595)

A full audit of your digital exposure — breach records, data broker listings, social profiles, dark web presence, and more. Delivered in 48 hours.

Get The Mirror (€595) or get a Free Exposure Check

Share this briefing

If this was useful, sharing it helps others protect themselves. It also helps keep the intelligence briefings free.