Welcome to the Location Data Blog!

Learn how companies all over the globe are utilising location data to enhance business operations and improve profitability. Keep up with industry updates, best practices, and key learnings from location intelligence projects we have executed.

Universal data collection for AI training with Geolancer

Artificial Intelligence (AI) is rapidly transforming industries, from healthcare to finance, signaling a world where AI isn't just an asset but a necessity. Yet the backbone of this transformation, high-quality training data, is facing its own challenges. Contextually rich and representative datasets are vital; without them, even sophisticated AI can perpetuate biases, reducing effectiveness and raising ethical concerns. While broad-spectrum models like GPT-4 absorb varied data, specialized models need niche, context-intensive datasets. Unfortunately, many data collection methods miss the mark, leaving gaps in representation.
 
In our latest solution brief, we dive into these challenges and introduce Quadrant’s Geolancer: a platform designed to revolutionize data collection by offering comprehensive, diverse, and high-quality data.
 
Read More

Eliminating bias from AI datasets: The imperative and how Quadrant helps

In the modern world, Artificial Intelligence (AI) is being leveraged across industries to tackle issues as diverse as inventory management in retail and route optimization in navigation. Given its immense potential, AI is increasingly being applied in sensitive areas such as finance, marketing, and human resources, which raises the question: will the use of AI in these and other fields remedy or amplify the problems of flawed decision-making? This article delves into the matter of ‘fairness’ in AI systems, elaborates on real-world instances of AI-based discrimination, discusses existing approaches to mitigating AI bias, and more.
 

Read More

Server-to-Server app monetization for data supply chain transparency

The location data industry is characterised by opaque supply chains, making it challenging for buyers to procure high-quality datasets. Quadrant’s mission to instill transparency into the industry starts with how and where we source the data. We have been successfully working on acquiring data right where it is created, incentivizing developers to ethically monetize their mobile apps.

One way of doing this is SDK integration, which we have covered in detail. The other, increasingly popular method is Server-to-Server (S2S) integration; in this piece, we give readers a peek behind the curtain so that they can understand how S2S works.  
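To illustrate the general shape of an S2S flow (not Quadrant's actual API; the endpoint and field names below are hypothetical), an app's backend typically batches location records into a JSON payload and POSTs it to the data partner's ingestion endpoint:

```python
# Illustrative sketch of a Server-to-Server (S2S) location event batch.
# The endpoint URL and schema are hypothetical, for explanation only.

import json

INGEST_URL = "https://ingest.example.com/v1/locations"  # hypothetical

def build_event(device_id: str, lat: float, lon: float,
                timestamp_ms: int, accuracy_m: float) -> dict:
    """Assemble one location record in a typical S2S schema."""
    return {
        "device_id": device_id,       # pseudonymous identifier
        "latitude": lat,
        "longitude": lon,
        "timestamp": timestamp_ms,    # epoch milliseconds
        "horizontal_accuracy": accuracy_m,
    }

batch = [build_event("a1b2-c3d4", 1.3521, 103.8198, 1700000000000, 12.5)]
payload = json.dumps({"events": batch})
# The app's backend would POST `payload` to INGEST_URL with its API key,
# so no SDK ever runs on the end user's device.
```

Because the records are assembled server-side, the publisher keeps full control over what is shared and when.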

Read More

Using AI to clean Personally Identifiable Information from user-generated data sets

Having access to large repositories of data enables businesses to optimise operations in several ways, including personalised advertising, greater supply chain efficiency, and more satisfying customer experiences.

However, people have grown increasingly wary of trusting businesses and governments with their data. Several high-profile data privacy breaches at LinkedIn, Alibaba, and Yahoo (to name a few) collectively impacted billions of users.

Ethically managing data and making sure no Personally Identifiable Information (PII) makes it to big data sets is a challenge we take very seriously at Quadrant.
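As a toy illustration of the idea (the article covers AI-driven approaches; this simple pattern-based sketch is not Quadrant's method), obvious identifiers such as email addresses and phone numbers can be scrubbed before records enter a dataset:

```python
# Minimal, pattern-based sketch of PII redaction: replace obvious
# email addresses and phone numbers with placeholder tags.
# Illustrative only; real pipelines use far more robust detection.

import re

EMAIL = re.compile(r"[\w.+-]+@[\w-]+\.[\w.-]+")
PHONE = re.compile(r"\+?\d[\d\s().-]{7,}\d")

def redact(text: str) -> str:
    """Replace email addresses and phone numbers with tags."""
    text = EMAIL.sub("[EMAIL]", text)
    return PHONE.sub("[PHONE]", text)

print(redact("Contact jane.doe@example.com or +65 6123 4567"))
# Contact [EMAIL] or [PHONE]
```

Regex rules alone miss context-dependent identifiers (names, addresses, free-text mentions), which is where AI-based detection earns its keep.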

Read More

Performing extrapolation on location data to derive relevant insights

Location data is collected from multiple sources of varying quality: GPS signals from mobile devices, beacons and Wi-Fi connections, the notorious Bidstream, and more. In most cases, even genuine location data cannot represent the entire population of a region. This discrepancy can be attributed to smartphone penetration, app-specific demographic variations, hardware inconsistencies, and the sources of the location data.

To perform meaningful analysis that accounts for mobility patterns and other trends in a larger region, data scientists use projection models to make an accurate estimation of a region’s population and normalise data counts to fit the use case. This is called data extrapolation.
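In its simplest form, the idea above can be sketched as scaling observed counts by the ratio of a region's census population to the observed device panel (the figures below are illustrative, not Quadrant data):

```python
# Minimal sketch of count extrapolation: project observed device-panel
# counts up to a region's full population. Assumes the panel is an
# unbiased sample of that population.

def extrapolate(observed_count: int, observed_devices: int,
                census_population: int) -> float:
    """Scale an observed event count by the population/panel ratio."""
    factor = census_population / observed_devices
    return observed_count * factor

# e.g. 12,000 store visits observed across a panel of 50,000 devices,
# in a region of 1,000,000 people:
projected = extrapolate(12_000, 50_000, 1_000_000)
print(projected)  # 240000.0
```

Production projection models refine this naive ratio to correct for the demographic and source-level skews described above, rather than assuming a uniform sample.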

Read More