Inside Pawtocol: Business Data Request Content
One of the most distinctive innovations of the Pawtocol platform is the Data Marketplace, a first-of-its-kind, peer-to-peer exchange where our users can safely sell their de-identified data directly to companies and groups that want to buy it. For users, this means getting paid when someone wants to use their data. For buyers, it means richer datasets at lower prices.
The Data Marketplace will function much like a stock market. The sellers are the platform users, and the buyers could range from a local start-up pet store trying to better understand its market to the FDA trying to determine whether grain-free food really does harm your dog’s heart. Since our model dramatically reduces the overhead costs associated with collecting, verifying and storing data, these savings can be passed on to data buyers. More high-quality data available to more buyers means more problem solving, more competition and more innovation.
For data creators (the platform’s users) the Data Marketplace represents an opportunity to recapture a portion of the value their data represents to the tech companies that have used it to generate record profits. For data consumers (like researchers, businesses or advertisers), the Data Marketplace represents a revolutionary step forward in the availability, accuracy and variety of data pertaining to the pet industry.
Pawtocol believes this to be a core component of the first sustainable business model for modern technology companies because the costs and benefits of data are aligned more accurately across creators and consumers. Perhaps a better way to say it is, we’re building a tech company that pays fairly for the digital resources it consumes.
Here is a summary of the most important takeaways from the following discussion:
- The data available for sale can potentially include any platform activity, covering areas such as consumer purchasing, veterinary medicine and treatments, location information, pet or pet owner demographics, recreational activities, and a variety of other classes of information.
- The marketplace will function like a stock market, with data buyers offering to buy specific data at a set price.
- This model will likely lead to cheaper, richer datasets than are available for purchase from traditional data aggregators.
- Pawtocol’s innovative design features allow data to be exchanged while maintaining an individual’s privacy as well as a high degree of trust in the data.
What data will be available?
In short, any information captured within the platform and consented for sharing by a user can be made available for purchase. We make no effort to anticipate the myriad end-uses of such rich datasets. Instead, we provide sufficient controls and transparency to safely exchange only such information that is necessary to meet the buyer’s need, with the price offered serving as a reward proportional to the risk perceived by the seller.
All that said, we can conceive of several exciting uses of the data available in the marketplace:
- More responsive, inclusive and cost-effective medical research. The current grain-free debate is an excellent use-case for Pawtocol’s Data Marketplace. Data creators can easily track, and ultimately share, their pet’s feeding and exercise regimens, along with medical records. Curating a research dataset to adequately model a system as complex as diet across dog breeds is simply too resource-intensive (in time and money) to be feasible; this is evidenced by the small size of even the most robust available studies on the topic. Identifying subjects, collecting inputs, verifying those inputs and collating them involves two categories of expense: the first is payment to study subjects for participation and the second is payment for administration of the process, with the latter often consuming over 90% of the total cost. The Pawtocol platform eliminates the need for nearly all of these administrative efforts. This dramatically reduces the time and money needed to prepare such a study, allowing for larger, more varied samples, which will ultimately accelerate vital research.
- More flexible, precise and accessible market research. When a pet food manufacturer examines the potential to enter a new product or geographic market, it is often limited to focus groups, aggregated sales data and broad assumptions. This leads to suboptimal decisions and lost efficiency. Performing the necessary research to enter a new market is therefore often cost-prohibitive for smaller businesses, serving as a barrier to entry. In the Data Marketplace, a business could simply offer a small payment to consumers in exchange for revealing their relevant purchasing preferences. This will yield benefits to consumers in the form of greater competition and, in turn, a greater breadth of products to choose from.
- More timely, insightful and accurate supply-chain analysis. Managing ingredient sourcing, manufacturing and delivery processes has become ever more challenging as cost control pressures meet enhanced market competition and longer delivery routes. One of the most serious consequences of this is the growing prevalence of potentially lethal food recalls. With the help of Pawtocol’s Spectrophotometry scanner and the data marketplace, food manufacturers and retailers can directly measure variances in the quality of the products at the point of consumption. These businesses can incentivize consumers of their product to measure for the presence of potentially dangerous substances, such as mold, through the data marketplace. This will lead to faster recognition of problems and even the ability for manufacturers to identify a potential issue before it becomes more serious.
How will it work?
The Pawtocol Data Marketplace is just that, a marketplace. While the user interfaces will look substantially different, the best analogue for envisioning the underlying process is the stock market. At its most basic level, a stock market needs only two things to function:
- A certain amount of information about the security that is knowable by both buyers and sellers.
- A way to publicize the price a buyer or seller would accept for that security.
A potential data buyer would begin by crafting his or her request, entering the information a seller would need to assess whether the price offered is fair. This will include a description of the precise data that would be exchanged, the time a seller has to accept or reject the offer, and the price being offered. The latter two components are relatively straightforward, but the content of the data being bid on could be complex or its potential risks obscure. For this reason, the platform will offer a “score” intended to distill concerns, such as the potential for the information to be used to uniquely identify a person, into a single intuitive value. This will encourage “liquidity” in the marketplace by making each request for data more understandable at a glance.
So, just as a buyer of a security places a bid for that security, which is potentially matched with a seller willing to pay that price, the buyer of a data record will submit their request for data to the marketplace and, when a seller believes the price offered to be fair, an exchange will occur.
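The bid-and-accept flow described above can be sketched in a few lines of code. Everything here, from the `DataRequest` fields to the `seller_accepts` rule, is an illustrative assumption rather than Pawtocol’s actual design:

```python
from dataclasses import dataclass

@dataclass
class DataRequest:
    description: str      # precise data being requested
    price_offered: float  # payment to the seller (hypothetical units)
    expires_at: int       # unix timestamp after which the offer lapses
    privacy_score: int    # platform-computed re-identification risk (assumed 0-100)

def seller_accepts(request: DataRequest, reserve_price: float, now: int) -> bool:
    """A seller accepts when the offer is still open and the price
    meets or exceeds their personal reserve for data of this kind."""
    return now < request.expires_at and request.price_offered >= reserve_price

req = DataRequest("breed and age of dog", price_offered=2.5,
                  expires_at=1_700_000_000, privacy_score=12)
print(seller_accepts(req, reserve_price=2.0, now=1_699_999_000))  # → True
print(seller_accepts(req, reserve_price=3.0, now=1_699_999_000))  # → False
```

Just as in a stock exchange, a trade clears only when the standing bid crosses a seller’s reserve; the privacy score simply informs where each seller sets that reserve.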
Up to this point, we’ve focused on how the marketplace will work from a functional perspective, but an adjacent question is, how will this sharing be possible without violating privacy? The answer to that is a concept vital to the platform’s function in general: selective disclosure. In practical terms, this means that a request for specific data can be fulfilled without providing any more data than is expressly requested. Take the following, albeit oversimplified, example: if a request for a dog’s breed required a complete adoption record to be provided, it’s possible that other information contained in the record besides the dog’s breed could reveal identifiable information about the dog’s owner. Perhaps the breed is rare or adoption location remote, making it possible to identify the adopter with just a bit of detective work. Scaling this up to more sensitive classes of information further emphasizes how important the concept of selective disclosure is to enabling an open marketplace where data can be exchanged freely between individuals without grave privacy concerns.
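To make the adoption-record example concrete, selective disclosure can be modeled as field-level filtering of a record. This is only a conceptual sketch: in the real platform, cryptographic techniques would ensure the unshared fields never leave the user’s control, rather than being filtered after the fact.

```python
# Hypothetical adoption record; field names are illustrative only.
adoption_record = {
    "breed": "Norwegian Lundehund",
    "adoption_date": "2019-04-02",
    "shelter_location": "Remote Township, AK",
    "owner_name": "J. Smith",
}

def disclose(record: dict, requested_fields: set) -> dict:
    """Return only the fields the buyer expressly requested."""
    return {k: v for k, v in record.items() if k in requested_fields}

print(disclose(adoption_record, {"breed"}))
# → {'breed': 'Norwegian Lundehund'}
# The rare breed is shared, but the remote shelter location and owner
# name, which together could re-identify the adopter, stay private.
```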
On the other hand, for the exchange to function despite the concept of selective disclosure, there must be a clear provenance with which the data can be proven accurate. This will be discussed in detail below.
How much will it cost?
The Pawtocol Data Marketplace is the world’s first price discovery marketplace for pet data. As a result, there is very little information with which useful forecasts can be constructed. Rather than speculate on nominal price levels, we can describe prices on a relative basis, as well as through the factors expected to influence prices up or down.
Let’s begin by reviewing the options available for data buyers today. Generally speaking, only two methods exist: collect it on your own or purchase a curated dataset from an aggregator.
Costs for collecting de-identified, transaction-level data range widely based on the specific approach one takes. In some cases, users can be incentivized to share data in exchange for a product or service that they value. Great examples of this are navigational apps like Google Maps or Waze, wherein the app developer invests considerable resources to develop its software and is then rewarded with data that it can use to generate revenue in a variety of ways. Once the data have been collected, the developer also incurs storage costs.
Building off of the above, purchasing curated datasets from an aggregator is also often extremely expensive, because aggregators too incur costs associated with collecting and storing the data, as well as verifying its accuracy. What’s worse, the market structure of this industry sector lends itself to high prices because only a small number of major data aggregators are in operation.
In comparing this to Pawtocol’s Data Marketplace, there are obvious structural differences that will lead to far lower prices, all else equal.
First, there is little or no direct cost associated with storing the data for the purpose of future sale. This is so for two reasons: because the distributed encrypted storage system used by the app doesn’t rely upon expensive centralized servers and because the data that will ultimately be sold in the marketplace are the same as that which the Pawtocol app itself is using to perform its primary functions.
Second, as you will read below, verification and quality control functions are performed natively within the platform itself and the distributed storage system is immutable, meaning there are no secondary costs associated with performing explicit quality control procedures. The data are verified upon creation by various functions within the application, and once they are stored, they cannot be modified.
Armed with this knowledge, if we compare traditional purchasing options to the approach employed by Pawtocol, it should become immediately clear how this will lead to lower costs on a per unit (i.e., data record) basis. Expensive centralized storage arrays and labor-intensive collection routines are essentially eliminated. In their place, we have simply inserted a small payment to be made directly to the creators of the data, making for a far healthier ecosystem all around.
Let us now move on to briefly discuss the pricing mechanism itself: the forces that will move prices up and down, again, all else equal. As with any marketplace, there are buyers and sellers, each with their own perceived value of the good or service in question. A sale only occurs when a buyer and seller agree upon the fair value of the product. This simple construct yields great insight into pricing because we are all creators of data in some aspect of our lives and have therefore cultivated a set of preferences around intrusiveness, generation costs and the expected worth of the data’s end use. For example, highly detailed medical data is intrusive (because medical information is seen as sensitive), often costly to generate (because treatments and procedures can be expensive) and can potentially yield significant profit for the buyer (because medical treatments and devices can carry high price tags). It’s likely that data of this class would be more expensive than, say, data describing a dog walker’s typical route (which is less intrusive, less costly to generate and likely less lucrative to the end buyer). Simply put, much of the underlying pricing mechanism will be surprisingly intuitive to both buyers and sellers.
How can I know the data is trustworthy?
This segment of the discussion is by far the most inherently technical, so for the sake of accessibility, certain aspects will simply be glossed over.
Generally speaking, there are three ways trustworthiness is conferred upon the data being offered for sale by users:
- Inference drawn by the buyer regarding the collection method
- The immutability of the blockchain and distributed storage systems themselves
- The ability to prove membership in a certain group in order to characterize the origin of the information
These have been listed in ascending order of complexity, so let’s begin with the first point. When a data request is constructed, requirements for how the data were collected can be included. This is one of the most obvious ways a potential buyer can assess the trustworthiness of the purchased data. For example, a pet’s heart rate self-reported by its owner would be less trustworthy than the same measurement collected by a licensed veterinarian during an exam. However, if the data were collected by a biometric monitoring device integrated with the platform, that method could be perceived as even more trustworthy than a vet collecting it.
Moving on to the second statement, one of the most compelling features of blockchain technology is the fact that the data it contains cannot be changed. This is what’s meant by immutable. A complete explanation of precisely how this is possible requires knowledge of cryptography, but the following analogy can suffice for most. Consider a series of three safes nested within one another, each locked by a complex combination (e.g., far more complex than the padlock on your high school locker). The first safe is the largest and, if opened, reveals a slightly smaller, second safe. Within the second safe is the final and smallest of the three. Inside the third safe is an object of great value. You do not know the combination for any of the safes. With enough time and persistence, it’s possible that even someone with no prior safe-cracking experience could open the first safe, but this would likely take months or even years of effort to check all possible combinations until the correct one is found. With the first safe cracked, you would then need to repeat the process for the second and third safes in order to retrieve the object, which would likely take longer than many human lifespans. Blockchain technology relies on a similar construct, except there are millions of safes and each individual combination is practically impossible to guess. Cracking all of these would take longer than the universe has existed. This is what makes data stored on a blockchain effectively impossible to change.
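The nested-safes analogy corresponds to a hash chain, the basic structure underlying blockchain immutability. The sketch below is a minimal generic illustration, not Pawtocol’s implementation: each block’s hash covers the previous block’s hash, so altering any earlier record changes every hash after it and is immediately detectable.

```python
import hashlib

def block_hash(prev_hash: str, data: str) -> str:
    """Hash a record together with the previous block's hash,
    chaining the blocks like nested safes."""
    return hashlib.sha256((prev_hash + data).encode()).hexdigest()

records = ["vet visit 2021-03-01", "vaccination 2021-06-12", "checkup 2022-01-20"]
chain = []
prev = "0" * 64  # genesis value
for data in records:
    prev = block_hash(prev, data)
    chain.append(prev)

# Tampering with the earliest record produces a different hash, which
# would cascade through and invalidate every later block:
tampered = block_hash("0" * 64, "vet visit 2021-03-02")
print(tampered == chain[0])  # → False: the change is detected at once
```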
And finally, we come to the most technical design component pertaining to the trustworthiness of the data for sale in the marketplace: the group that a person collecting or verifying the data belongs to confers varying degrees of trust upon the data. Similar to the immutability discussion above, a complete understanding of this requires knowledge of set theory and cryptography. However, in this case, an informative yet technically accurate analogy isn’t readily available. As such, certain assumptions must be accepted, namely:
- A robust process exists for a person to attain membership in a group
- Data can be proven to have either come from or been confirmed by a member of such a group
If we accept those two statements as fact, the remainder of the discussion is actually quite intuitive. We apply similar logic constantly in our everyday lives. It’s generally thought of as “credibility”. For example, you’ll likely place more trust in a review for a new restaurant that was written by a well-known food critic, or even a friend you know to be knowledgeable in this field, than a review written by an ordinary restaurant-goer that you have no prior information about. This is simply because the writer can prove membership in a group, you know that members of that group possess certain characteristics, and you know the content you’re reading came from that writer. The Pawtocol platform relies upon complex mathematics and information theory to implement these intuitive principles as yet another means of understanding the reliability of data purchased in the Data Marketplace.
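For readers curious how the second assumption, proving membership without revealing the full roster, might look in practice, one well-known construction is a Merkle tree: the group publishes a single root hash, and a member proves inclusion by revealing only their own credential plus a few sibling hashes. The sketch below is a generic illustration of that idea, not Pawtocol’s actual scheme, and the credential strings are invented for the example.

```python
import hashlib

def h(x: bytes) -> bytes:
    return hashlib.sha256(x).digest()

# Hypothetical group of credentialed members (e.g. licensed veterinarians).
members = [b"vet:alice", b"vet:bob", b"vet:carol", b"vet:dave"]
leaves = [h(m) for m in members]

# Four-leaf tree: root = H(H(l0 + l1) + H(l2 + l3)). Only the root is published.
left, right = h(leaves[0] + leaves[1]), h(leaves[2] + leaves[3])
root = h(left + right)

def verify(leaf: bytes, siblings: list, root: bytes) -> bool:
    """Recompute the path from a leaf to the root using sibling hashes.
    Each proof step is (sibling_hash, sibling_is_on_the_right)."""
    node = leaf
    for sib, sib_is_right in siblings:
        node = h(node + sib) if sib_is_right else h(sib + node)
    return node == root

# Bob proves membership with just his leaf and two sibling hashes,
# never revealing the other members' identities:
proof = [(leaves[0], False), (right, True)]
print(verify(leaves[1], proof, root))  # → True
```

Anyone holding only the published root can run `verify`, which mirrors the restaurant-critic intuition: you trust the review because the writer can prove membership in a group whose admission process you already trust.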