IP geolocation is useful, but it is not GPS
IP geolocation tries to infer a real-world location from an IP address. That is very different from GPS, where the device itself measures its position from satellite signals. An IP address is primarily a routing identifier assigned and reassigned by networks, not a permanent marker glued to a home or a person.
Most IP geolocation systems are built from multiple data sources, each with blind spots: regional internet registry data, ISP information, routing and topology signals, latency measurements, user-contributed corrections, Wi-Fi and mobile network signals, commercial datasets, and sometimes “ground truth” harvested from applications where users share location. The result is a best-effort estimate that is often very good at the country level, frequently imperfect at the city level, and sometimes wrong in ways that look bizarre to end users.
Benchmarks published by major commercial providers illustrate the pattern: roughly 99.8% accuracy at the country level, with substantially lower accuracy at finer granularity; city-level estimates in the US, for example, land within a 50 km radius only about two-thirds of the time.
What “accuracy” means depends on the level you care about
When someone says “IP geolocation is accurate,” the first question should be: accurate for what granularity?
1. Country level
Often very high, because country assignment aligns with address allocation, regulatory boundaries, and the way most providers structure their networks. Many databases claim very high country-level precision, and research literature frequently finds strong performance at this level.
2. Region or state level
Usually decent in many countries, but it varies by market structure and the availability of reference data.
3. City level
This is where the wheels often come off. City-level accuracy can be good in dense urban areas with rich signals, but poor in rural areas, small towns, places with limited mobile network coverage, and anywhere traffic is backhauled to a distant hub. Academic studies document substantial variability across databases and contexts.
4. Street address level
If a service implies this from an IP address alone, treat it as marketing, not measurement. Even industry guidance generally frames geolocation as coarse and probabilistic rather than pinpoint.
Also note that “accurate within 50 km” and “accurate to the correct city name” are not the same claim. Some vendors use an “accuracy radius” concept, where the predicted point is accompanied by a radius that’s meant to capture uncertainty.
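To make that distinction concrete, here is a minimal Python sketch; the coordinates and the `accuracy_radius_km` field are illustrative assumptions, not any particular vendor's schema. It checks both claims for a single prediction:

```python
import math

def haversine_km(lat1, lon1, lat2, lon2):
    """Great-circle distance between two points in kilometers."""
    r = 6371.0  # mean Earth radius in km
    p1, p2 = math.radians(lat1), math.radians(lat2)
    dp = math.radians(lat2 - lat1)
    dl = math.radians(lon2 - lon1)
    a = math.sin(dp / 2) ** 2 + math.cos(p1) * math.cos(p2) * math.sin(dl / 2) ** 2
    return 2 * r * math.asin(math.sqrt(a))

# Hypothetical prediction and ground truth (illustrative values only).
predicted = {"city": "Newark", "lat": 40.7357, "lon": -74.1724, "accuracy_radius_km": 50}
actual = {"city": "New York", "lat": 40.7128, "lon": -74.0060}

error_km = haversine_km(predicted["lat"], predicted["lon"], actual["lat"], actual["lon"])
within_radius = error_km <= predicted["accuracy_radius_km"]
right_city = predicted["city"] == actual["city"]

print(f"error: {error_km:.1f} km, within radius: {within_radius}, right city: {right_city}")
```

Here the prediction is comfortably "within 50 km" yet names the wrong city, which is exactly why the two claims should never be treated as interchangeable.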
Why two IP geolocation tools can give different answers
If you type the same IP into multiple lookup sites, you may get different cities, different ISPs, or different proxy flags. That does not necessarily mean one is lying. It usually means they are using different inputs, different update cycles, and different rules for ambiguous cases.
Here are the most common drivers of disagreement.
1. The IP does not represent the user’s physical location
From the network's point of view, many users are effectively "somewhere else": they connect from one place, but their traffic surfaces at a point of presence elsewhere.
- Mobile networks: Mobile carriers often route traffic through a small number of gateways. Your phone may be in one city, while the carrier’s public egress IP maps to another.
- Corporate networks: Employees may appear to be located at HQ or a central data center due to VPNs and shared egress.
- Cloud and hosting: Traffic can exit from a cloud region that is not the end user’s physical location.
Even the same provider can have multiple plausible “locations” for an IP: where it is registered, where the network equipment is, and where end users usually appear to be. Some datasets explicitly distinguish these concepts (for example “registered country” versus “user location” style interpretations).
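As a sketch of what that means for a consumer of lookup results, the record below is hypothetical, but mirrors the kind of multi-interpretation fields some datasets expose; the point is to pick the interpretation that matches your use case rather than assume there is one "location":

```python
# Hypothetical lookup record; field names are illustrative, not a real vendor schema.
record = {
    "registered_country": "US",          # where the block is registered with a registry
    "infrastructure_city": "Ashburn",    # where the network equipment appears to be
    "user_location_city": "Pittsburgh",  # where end users of this range usually appear
}

def location_for(purpose: str, rec: dict) -> str:
    """Pick the interpretation of "location" that matches the use case."""
    if purpose == "compliance":
        return rec["registered_country"]   # regulatory checks often want registration
    if purpose == "content":
        return rec["user_location_city"]   # localization wants likely user location
    return rec["infrastructure_city"]      # network debugging cares about equipment

print(location_for("content", record))  # -> Pittsburgh
```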
2. IP allocation and reassignment happens constantly
IP blocks move between organizations. ISPs acquire other ISPs. Enterprises renumber. IPv4 leasing and transfers exist. Mobile carriers rebalance address pools. If a database updates weekly and another updates monthly, they can disagree for weeks after a reassignment.
Research on database stability highlights that geolocation mappings can change over time and that reproducibility can suffer if you do not control for database version and timestamp.
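A lightweight habit that follows from this: store the database identity and a lookup timestamp next to every result, so a later disagreement can be traced to a version change instead of argued about. A minimal sketch, with assumed field names:

```python
import datetime

def record_lookup(ip: str, result: dict, db_name: str, db_build: str) -> dict:
    """Wrap a geolocation result with the provenance needed to reproduce it later."""
    return {
        "ip": ip,
        "result": result,
        "db_name": db_name,    # which database answered
        "db_build": db_build,  # vendor build/version identifier
        "looked_up_at": datetime.datetime.now(datetime.timezone.utc).isoformat(),
    }

entry = record_lookup("198.51.100.7", {"country": "US", "city": "Denver"},
                      db_name="vendor-a-city", db_build="2024-06-18")
print(entry["db_build"], entry["looked_up_at"])
```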
3. Some “locations” are defaults, not discoveries
When a database lacks confident evidence, it may fall back to a default such as:
- the ISP’s headquarters,
- the centroid of a region,
- the largest city in a service area,
- or the location of a network operations center.
This is one reason people see repeated odd outcomes like “everyone is in the same city” for a particular provider or country. Classic research papers on geolocation databases discuss these artifacts and large cross-vendor differences.
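One practical tell: when a large share of a provider's addresses resolve to the exact same coordinates, you are probably looking at a fallback point rather than per-address evidence. A rough detection sketch over hypothetical lookup results:

```python
from collections import Counter

# Hypothetical (lat, lon) lookup results for 100 IPs from one provider.
coords = [(39.0997, -94.5786)] * 87 + [(38.9072, -77.0369)] * 9 + [(41.8781, -87.6298)] * 4

counts = Counter(coords)
most_common_point, hits = counts.most_common(1)[0]
share = hits / len(coords)

# If one point dominates, suspect a default/centroid, not a per-IP discovery.
if share > 0.5:
    print(f"{share:.0%} of IPs map to {most_common_point}: likely a default location")
```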
4. Different vendors use different evidence and weighting
Two databases can both be “reasonable” but choose different tradeoffs:
- prioritize freshness vs. conservatism,
- prefer user-submitted corrections vs. curated sources,
- infer location from latency and topology more aggressively vs. more cautiously,
- treat mobile and enterprise ranges differently.
So Tool A may pick a precise city guess while Tool B chooses a broader metro-area location because it is less confident.
5. Self-published geofeeds are improving the ecosystem, but they are not magic
To reduce ambiguity, network operators can publish coarse geolocation mappings for their prefixes using a standard “geofeed” format (RFC 8805). The format is intentionally coarse and is designed for operational use, not for tracking individuals.
Guidance for discovering and using geofeed data has evolved in the standards world (RFC 9092 and its successor RFC 9632), including operational and privacy considerations.
The key point: if one database consumes a network’s geofeed and another does not, they can diverge, and the one using geofeeds may be more correct for that network. Recent operator commentary also notes limitations, like missing timestamps and confidence indicators.
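To get a feel for how simple the format is, here is a minimal parser sketch for RFC 8805-style lines; the sample feed content below is invented for illustration:

```python
import csv
import io
import ipaddress

# Invented sample feed; real geofeeds are UTF-8 CSV files published by operators.
feed = """\
# prefix,country,region,city,postal
192.0.2.0/24,US,US-CO,Denver,
198.51.100.0/24,DE,DE-BE,Berlin,
203.0.113.0/25,,,,
"""

def parse_geofeed(text):
    """Yield (network, country, region, city) from RFC 8805-style lines."""
    for row in csv.reader(io.StringIO(text)):
        if not row or row[0].lstrip().startswith("#"):
            continue  # skip blank lines and comments
        prefix, country, region, city = (row + ["", "", "", ""])[:4]
        yield ipaddress.ip_network(prefix.strip()), country, region, city

for net, country, region, city in parse_geofeed(feed):
    print(net, country or "?", region or "?", city or "?")
```

Note that fields can legitimately be empty: the format lets an operator say "this prefix exists" without asserting a city, which is part of its deliberately coarse design.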
Why accuracy claims feel inconsistent in practice
You might see a vendor claim “city accuracy of X%,” yet your personal test fails repeatedly. Three reasons:
- Your sample is biased. If you test mostly VPNs, mobile IPs, enterprise networks, or cloud egress, you are testing the hardest cases.
- Metrics vary. A “hit within 50 km” is not the same as “right city label,” and neither equals “right neighborhood.”
- Ground truth is hard. Even researchers struggle to build high-quality ground-truth datasets at scale without introducing selection bias. That is why academic work spends so much time on methodology and evaluation design.
A practical way to compare two tools when they disagree
If Tool A says “City A” and Tool B says “City B,” you can treat the disagreement as a measurement problem rather than an argument.
1. Classify the IP context
- Is it a mobile carrier, cloud provider, corporate network, or known VPN range?
- If yes, expect city-level disagreements.
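A rough classification pass might look like the sketch below; the ASN-to-category table is a made-up stand-in for whatever ASN, cloud-range, or VPN intelligence you actually have:

```python
# Hypothetical ASN -> context table; in practice this would come from ASN databases,
# published cloud provider ranges, or VPN/proxy intelligence feeds.
ASN_CONTEXT = {
    64512: "mobile",  # private-use ASNs, made up for illustration
    64513: "cloud",
    64514: "enterprise",
    64515: "vpn",
}

def expected_city_reliability(asn: int) -> str:
    context = ASN_CONTEXT.get(asn, "fixed-broadband")
    # For these contexts, city-level disagreement between tools is normal.
    if context in {"mobile", "cloud", "enterprise", "vpn"}:
        return f"{context}: treat city-level output as low confidence"
    return f"{context}: city-level output is more often usable"

print(expected_city_reliability(64512))  # mobile gateway egress
print(expected_city_reliability(65010))  # unknown ASN -> assume fixed broadband
```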
2. Compare at multiple levels
Do both agree on the country? Region/state? ISP or ASN?
If they agree at the higher levels but not at the city level, that often means the city inference is the weak link.
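Here is a compact sketch of that coarse-to-fine comparison, assuming each tool's output has been normalized into a simple dict (the field names and values are made up):

```python
# Hypothetical normalized outputs from two lookup tools.
tool_a = {"country": "US", "region": "PA", "asn": 64512, "city": "Pittsburgh"}
tool_b = {"country": "US", "region": "PA", "asn": 64512, "city": "Monroeville"}

# Compare coarse-to-fine; the first level that diverges is the weak link.
for level in ("country", "region", "asn", "city"):
    a, b = tool_a.get(level), tool_b.get(level)
    status = "agree" if a == b else "DISAGREE"
    print(f"{level:8} {status}: {a} vs {b}")
```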
3. Quantify how different the results are
When tools provide numeric fields (accuracy rate, radius, or error distance), comparing them is easier than comparing city names.
For example, if one provider claims 66% city accuracy (within 50 km) and another claims 55% on a similar benchmark, you can quantify the relative gap using a percentage difference calculator.
That turns “they feel different” into “their claimed city-level accuracy differs by about X%,” which is much easier to communicate to non-technical stakeholders.
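The arithmetic is simple enough to do inline; here is a sketch with the example numbers above:

```python
a, b = 66.0, 55.0  # claimed city-level accuracy (%) from the example above

abs_gap = abs(a - b)                            # gap in percentage points
pct_difference = abs_gap / ((a + b) / 2) * 100  # relative to the mean of the two claims

print(f"absolute gap: {abs_gap:.0f} percentage points")
print(f"percentage difference: {pct_difference:.1f}%")  # ~18.2%
```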
4. Interpret results probabilistically
Instead of asking “which one is correct,” ask “what is the probability this is correct at the level I need?” If a database says its city-level accuracy is around two-thirds for a certain context, that is a reminder that you should expect wrong-city outcomes regularly, even when nothing is “broken.”
If you want to sanity-check how likely a correct hit is over repeated lookups (for example across a batch of users), a probability calculator can help you model expected outcomes.
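As a sketch of that kind of modeling, the snippet below treats each lookup as an independent coin flip, with the two-thirds figure used as an assumed per-lookup hit rate:

```python
import math

p = 0.66  # assumed probability that one lookup returns the right city
n = 10    # number of independent lookups in the batch

def binom_pmf(k: int, n: int, p: float) -> float:
    """P(exactly k correct) under a binomial model with independent lookups."""
    return math.comb(n, k) * p**k * (1 - p) ** (n - k)

p_all = p ** n  # every lookup correct
p_majority = sum(binom_pmf(k, n, p) for k in range(6, n + 1))  # 6+ of 10 correct

print(f"all 10 correct:     {p_all:.1%}")  # ~1.6%
print(f"at least 6 correct: {p_majority:.1%}")
```

The takeaway matches the probabilistic framing: even a respectable two-thirds hit rate makes a clean sweep across a batch of users very unlikely.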
When you can trust IP geolocation most
IP geolocation is typically most reliable when:
- you only need country (or sometimes region),
- the IP belongs to fixed broadband with stable assignment patterns,
- the database has recent updates for that ISP and geography,
- and you treat the output as approximate, not as a precise address.
When you should be cautious
Be cautious when:
- the IP is from mobile carriers or CGNAT-heavy environments,
- the user is likely behind a VPN, proxy, or corporate network,
- you need city-level precision for compliance, taxation, or fraud decisions,
- you are using geolocation for identity or attribution rather than for coarse routing or content localization.
Standards discussions around geofeeds explicitly call out privacy risks and the need for caution because publishing and consuming location mappings can expose sensitive inferences.
Bottom line
IP geolocation is best understood as a probabilistic estimate produced from imperfect, fast-changing network signals. Different tools disagree because they ingest different data, update on different schedules, handle ambiguity differently, and sometimes interpret “location” as different things (registered location vs. end-user location vs. network egress).
If you treat IP geolocation as a coarse signal, validate it against context (mobile, VPN, enterprise, cloud), and quantify differences instead of debating them, you can use it effectively without expecting precision it cannot reliably deliver.