Entity Resolution for Big Data

A Summary of the KDD 2013 Tutorial Taught by Dr. Lise Getoor and Dr. Ashwin Machanavajjhala


Image 1: Challenges in Big Data for ER

Entity Resolution is becoming an important discipline in Computer Science and in Big Data, especially with the recent release of Google's Knowledge Graph and the open Freebase API. It was therefore exceptionally timely that last week at KDD 2013, Dr. Lise Getoor of the University of Maryland and Dr. Ashwin Machanavajjhala of Duke University gave a tutorial on Entity Resolution for Big Data. We were fortunate enough to be invited to a run-through workshop at the Center for Scientific Computation and Mathematical Modeling at College Park, and wanted to highlight some of the key points for those unable to attend.

What is Entity Resolution?

Entity Resolution is the task of disambiguating manifestations of real-world entities in various records or mentions by linking and grouping them. For example, there could be different ways of addressing the same person in text, different addresses for the same business, or different photos of a particular object. This has many applications, particularly in government and public health data, web search, comparison shopping, law enforcement, and more.

Additionally, as the volume and velocity of data grow, inference across networks and semantic relationships between entities becomes a greater challenge. Entity Resolution can reduce this complexity by proposing canonicalized references to particular entities and by deduplicating and linking entities. For instance, consider the following example of a coauthor network from the bibliographic data used for InfoVis 2004.


Image 2: ER for Network Simplification

Deduplication significantly reduced the complexity of the network, collapsing a ninth-order graph into a much smaller fourth-order graph. Dr. Getoor's work on D-Dupe, an interactive data deduplication tool, demonstrates how an iterative entity resolution process can work for social networks, an increasingly important and growing class of data that transcends traditional stovepipes like Twitter and Facebook. Yet another example is the multi-port disambiguation of traceroute output from routers. When this dataset was analyzed without resolving entities, it led some technologists to comment on the fragility of the Internet; in fact, most Internet backbones are well defined and hardened, with only a handful of actual router entities.

However, there are significant challenges in the ER discipline, not least of which is the fact that there is no unified theory and, ironically, ER itself goes by many names! Other challenges, like language ambiguity, poor data entry, missing values, changing attributes and formatting, and abbreviations and truncation, mean that ER is a discipline that draws not only on databases and information retrieval, but also on natural language processing and machine learning.

Scaling to big data only increases the challenge, as heterogeneity and cross-domain resolution become important requirements. As a result, ER techniques must be parallel and efficient in order to be used with Big Data tools like MapReduce and distributed graph databases.

Tasks in Entity Resolution

Generically speaking, we can frame ER as follows: entities exist in the real world, while records and mentions of those entities exist in the digital world. The records and mentions may take many forms, but each refers to only a single real-world entity. We can therefore discuss the ER problem both as one of matching record pairs that correspond to the same entity, and as one of mapping a graph of related records/mentions to related entities.


Image 3: Abstract ER Problem

Deduplication: the task of clustering the records or mentions that correspond to the same entity. A variant of this task also computes the cluster representative (a canonical record) for each entity. This is commonly what we think of when we consider Entity Resolution.

Record Linkage: a slightly different version of the task, in which records from one deduplicated data store are matched to records in another. This task is usually posed in the context of already-normalized data, particularly in relational databases. In the context of Big Data, however, comparing every record against every other record is not feasible.

Reference Matching: in this task, we must match noisy records to clean ones in a deduplicated reference table. Note that identical records are assumed to match; this task is closely related to entity disambiguation.

Further, when relationships between entities are added, each of the three problem statements must also take the relationships between records/mentions into account, since those relationships are significant for disambiguating entities.

Evaluation of Entity Resolution

A quick note on the evaluation of entity resolution: pairwise metrics include precision and recall (and the F1 score that combines them), along with the cardinality of the set of predicted matching pairs. Cluster-level metrics take purity, completeness, and complexity into account; these include cluster-level precision and recall, closest cluster, MUC, the Rand index, and so on. However, there has been little work on the correct prediction of links.
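
For concreteness, here is a minimal sketch of the pairwise precision/recall/F1 computation, assuming predicted and true matches are given as canonically ordered record-id pairs (the representation is our own choice):

```python
def pairwise_metrics(predicted_pairs, true_pairs):
    """Pairwise precision, recall, and F1 over sets of record-id pairs,
    assuming each pair is canonically ordered (e.g. sorted tuples)."""
    predicted, truth = set(predicted_pairs), set(true_pairs)
    true_positives = len(predicted & truth)
    precision = true_positives / len(predicted) if predicted else 0.0
    recall = true_positives / len(truth) if truth else 0.0
    f1 = 2 * precision * recall / (precision + recall) if precision + recall else 0.0
    return precision, recall, f1

print(pairwise_metrics({(1, 2), (3, 4)}, {(1, 2), (5, 6)}))  # (0.5, 0.5, 0.5)
```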

The Algorithms of Entity Resolution

This section gives a brief overview of the algorithmic foundations presented by Lise and Ashwin, to provide context for the current state of the art in Entity Resolution. In particular, they discussed data preparation, pairwise matching, algorithms for record linkage, deduplication, and canonicalization. They also considered collective entity resolution algorithms, which I will mention briefly. Of course, they went into more depth than I will here, but I hope to provide a good overview.

Data Preparation

The first tasks, schema and data normalization, are required preparation for any Entity Resolution algorithm. In this step, schema attributes are matched (e.g. contact number vs. phone number), and compound attributes like addresses are normalized. Data normalization involves steps such as converting all strings to upper or lower case and removing whitespace. Data cleaning and dictionary lookups are also important.

Initial data prep is a big part of the work; smart normalization can go a long way!
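
As a rough illustration of this kind of normalization (the function names and the toy synonym table below are our own, not from the tutorial):

```python
import re

SYNONYMS = {"contact number": "phone"}   # toy schema-matching table

def normalize_string(value):
    """Lowercase, trim, and collapse internal whitespace."""
    return re.sub(r"\s+", " ", value.strip().lower())

def normalize_record(record):
    """Map synonymous schema attributes to a single name and normalize values."""
    return {
        SYNONYMS.get(normalize_string(key), normalize_string(key)): normalize_string(value)
        for key, value in record.items()
    }

print(normalize_record({"Contact Number": " 555  1234 ", "Name": "ACME Corp."}))
# {'phone': '555 1234', 'name': 'acme corp.'}
```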

The goal is to construct, for each pair of records, a "comparison vector" of similarity scores, one per component attribute. Similarity scores can be simple Boolean values (match or non-match) or real values computed with distance functions. For example, edit distance on textual attributes can handle typographic errors, Jaccard coefficients and other distance metrics can be used to compare sets, and even phonetic similarity can be used.
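
A minimal sketch of building such a comparison vector, using difflib's SequenceMatcher as a stand-in for an edit-distance score, a Jaccard coefficient over address tokens, and a Boolean phone match (the attribute choices are illustrative):

```python
from difflib import SequenceMatcher

def string_sim(a, b):
    """Character-level similarity in [0, 1]; a stand-in for an edit-distance score."""
    return SequenceMatcher(None, a, b).ratio()

def jaccard(a, b):
    """Jaccard coefficient between two sets."""
    a, b = set(a), set(b)
    return len(a & b) / len(a | b) if a | b else 1.0

def comparison_vector(r1, r2):
    """Component-wise similarity scores for a pair of normalized records."""
    return [
        string_sim(r1["name"], r2["name"]),                     # typo-tolerant string similarity
        jaccard(r1["address"].split(), r2["address"].split()),  # set comparison on address tokens
        1.0 if r1["phone"] == r2["phone"] else 0.0,             # simple Boolean match
    ]
```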

Pairwise Matching

After we have constructed a vector of component-wise similarities for a pair of records, we must compute the probability that the pair is a match. There are several methods for estimating this probability. Two simple proposals are to use a weighted sum or average of the component similarity scores, and to apply thresholds; however, it is extremely hard to pick weights or tune thresholds. Another simple approach is rule-based matching, but manual formulation of rule sets is difficult.
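
A toy sketch of the weighted-sum-plus-threshold idea, reusing comparison_vector from the previous sketch (the weights and threshold are arbitrary, which is exactly the tuning problem noted above):

```python
def match_score(gamma, weights=(0.5, 0.3, 0.2)):
    """Weighted average of the comparison vector components."""
    return sum(w * g for w, g in zip(weights, gamma)) / sum(weights)

def simple_match(r1, r2, threshold=0.85):
    # Picking the weights and the threshold is the hard part, as noted above.
    return match_score(comparison_vector(r1, r2)) >= threshold
```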

One interesting technique is the Fellegi & Sunter model: given a record pair r = (x, y), the comparison vector is γ. If M is the set of matching pairs of records and U is the set of non-matching pairs, linkage decisions are based on the ratio of P(γ | r ∈ M) to P(γ | r ∈ U). Further, error bounds μ and λ define thresholds that determine whether a record pair is a match, a non-match, or simply uncertain.
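
Here is a minimal sketch of the Fellegi & Sunter decision rule under the common simplifying assumptions of a binarized comparison vector and conditionally independent components; the m and u probabilities and the thresholds below are invented for illustration:

```python
import math

# Per-component agreement probabilities, estimated from (or assumed for) training data:
m_probs = [0.95, 0.90, 0.85]   # P(component agrees | pair is a match)
u_probs = [0.10, 0.20, 0.05]   # P(component agrees | pair is a non-match)

def log_likelihood_ratio(gamma, eps=1e-9):
    """Sum of per-component log ratios, assuming conditional independence
    and a binarized comparison vector (1 = agree, 0 = disagree)."""
    llr = 0.0
    for g, m, u in zip(gamma, m_probs, u_probs):
        llr += math.log((m if g else 1 - m) + eps) - math.log((u if g else 1 - u) + eps)
    return llr

def decide(gamma, upper=4.0, lower=-4.0):
    """The upper/lower thresholds play the role of the error bounds mu and lambda."""
    llr = log_likelihood_ratio(gamma)
    if llr >= upper:
        return "match"
    if llr <= lower:
        return "non-match"
    return "uncertain"
```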

In practice, Fellegi & Sunter requires some known matches to train the error bounds, so some form of supervised learning is needed. In general, many machine learning algorithms can be applied to ER: decision trees, SVMs, ensembles of classifiers, conditional random fields, and so on. The hard part is training set generation, because of class imbalance: non-matches far outnumber matches, and this can result in misclassification.

Active learning and unsupervised/semi-supervised techniques are used to work around the difficulties of training set generation. Lise and Ashwin propose committee-of-classifiers approaches, and even crowdsourcing, to leverage active learning in building a training set.

Constraints

There are several important forms of constraints, relevant to the next sections, over mentions Mi:

  1. Transitivity: if M1 matches M2, and M2 matches M3, then M1 and M3 must also match.
  2. Exclusivity: if M1 matches M2, then M3 cannot match M2.
  3. Functional dependency: if M1 and M2 match, then M3 and M4 must match.

For these broad classes of constraints there is both positive and negative evidence for specific constraints; e.g. there is a converse to each match constraint for a non-match, and vice versa. You may also consider hard and soft versions of each constraint type, as well as their extent: can the constraint be applied globally or only locally?

Based on these constraints, you can see that transitivity is the key to deduplication, exclusivity is the key to record linkage, and functional dependencies are used for data cleaning. Further constraints (aggregate, subsumption, neighborhood, etc.) can be used in a domain-specific context.
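
Because transitivity drives deduplication, pairwise match decisions are usually closed under transitivity before clusters are reported. A minimal union-find sketch of that closure (not from the tutorial, just an illustration of the constraint):

```python
def dedupe_by_transitivity(records, matched_pairs):
    """Group records into entity clusters by taking the transitive closure
    of pairwise match decisions, using union-find."""
    parent = {r: r for r in records}

    def find(x):
        while parent[x] != x:
            parent[x] = parent[parent[x]]   # path compression
            x = parent[x]
        return x

    for a, b in matched_pairs:
        parent[find(a)] = find(b)

    clusters = {}
    for r in records:
        clusters.setdefault(find(r), []).append(r)
    return list(clusters.values())

print(dedupe_by_transitivity(["M1", "M2", "M3", "M4"], [("M1", "M2"), ("M2", "M3")]))
# [['M1', 'M2', 'M3'], ['M4']]
```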

Specific Algorithms for Problem Domains

Record Linkage: propagating decisions through the exclusivity constraint means that the current best solution is weighted k-partite matching, where edges connect pairs of records from different data sets and edge weights are the pairwise match scores. The general problem is NP-hard, so a common optimization is to perform successive bipartite matchings.
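
As a sketch of a single bipartite matching step, scipy.optimize.linear_sum_assignment can solve the assignment problem over a pairwise score matrix (the library choice and the scores are ours, not prescribed by the tutorial):

```python
import numpy as np
from scipy.optimize import linear_sum_assignment

# Pairwise match scores between records in data set A (rows) and data set B (columns);
# the numbers are purely illustrative.
scores = np.array([
    [0.9, 0.1, 0.3],
    [0.2, 0.8, 0.4],
    [0.1, 0.3, 0.7],
])

# linear_sum_assignment minimizes total cost, so negate the scores to maximize match weight.
row_idx, col_idx = linear_sum_assignment(-scores)
matches = [(int(a), int(b)) for a, b in zip(row_idx, col_idx) if scores[a, b] >= 0.5]
print(matches)  # [(0, 0), (1, 1), (2, 2)]
```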

Deduplication: propagating decisions through transitivity leads to clustering-based entity resolution. There is a variety of clustering algorithms, but the common input is a pairwise similarity graph. Many clustering algorithms may also require the construction of a cluster representative, or canonical entity. Although hierarchical clustering or nearest-neighbor methods can be used, the recommended approach is correlation clustering.

Correlation clustering uses integer linear programming to maximize an objective that assigns positive and negative weights to clustering mentions x, y together, subject to transitive closure constraints. However, solving the ILP is NP-hard, so a number of heuristics are used to approximate it, including the greedy BEST/FIRST/VOTE algorithms, the greedy PIVOT algorithm, and local search.
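
As a rough illustration, the greedy PIVOT heuristic can be sketched as follows, assuming a Boolean is_match function derived from the pairwise scores (the function name and the seeding are our own choices):

```python
import random

def greedy_pivot(mentions, is_match, seed=42):
    """Greedy PIVOT approximation for correlation clustering: repeatedly pick a
    random pivot and cluster it with every remaining mention that matches it."""
    rng = random.Random(seed)
    remaining = list(mentions)
    clusters = []
    while remaining:
        pivot = remaining.pop(rng.randrange(len(remaining)))
        cluster = [pivot] + [m for m in remaining if is_match(pivot, m)]
        remaining = [m for m in remaining if not is_match(pivot, m)]
        clusters.append(cluster)
    return clusters
```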

Canonicalization: selection or construction of the mention or cluster representative that contains the most information. This can be rule based (e.g. take the longest value), or, for set-valued attributes, a union. Edit distance can be used to determine a most representative centroid, or a "majority rules" vote can be taken. Other approaches include the Stanford Entity Resolution Framework, which takes a black-box approach.
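
A hedged sketch of rule-based canonicalization along the lines described above, taking the longest value for text attributes and the union for set-valued ones (the attribute handling is our simplification):

```python
def canonicalize(cluster):
    """Build a cluster representative: take the union for set-valued attributes
    and the longest value for text attributes (a simple rule-based choice)."""
    attributes = {attr for record in cluster for attr in record}
    canonical = {}
    for attr in attributes:
        values = [record[attr] for record in cluster if attr in record]
        if all(isinstance(v, (set, frozenset)) for v in values):
            canonical[attr] = set().union(*values)   # set-valued: union
        else:
            canonical[attr] = max(values, key=len)   # text: longest value
    return canonical
```

For example, given two matching business records, this would keep "ACME Corporation" over "ACME" and merge their sets of e-mail addresses.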

Collective Approaches

If the decision for cluster membership depends on other clusters, we can use a collective approach. Collective approaches include non-probabilistic approaches like similarity propagation, probabilistic models including generative frameworks, and hybrid approaches.

Generative probabilistic approaches are based on directed models, where match decisions depend on one another in a generative manner. There is a variety of approaches, notably ones based on LDA and Bayesian networks. Undirected probabilistic approaches use semantics based on Markov networks, with the advantage that they allow a declarative syntax based on first-order logic to express constraints. Suggested approaches include conditional random fields, Markov Logic Networks (MLNs), and Probabilistic Soft Logic.

Probabilistic Soft Logic introduces reverse predicate equivalence, meaning that participating in the same relation with the same entity gives evidence of two entities being the same. This is not logically true, but it allows us to predict the probability of a match with truth values in [0,1]. A declarative language is used to define a constrained continuous Markov random field in first-order logic, and relaxed logical operators can be used. This has significant implications for scaling.

Scaling to Big Data

Blocking/Canopy Generation: the first approach to scaling ER to big data is blocking. Consider a naïve pairwise comparison of 1,000 business mentions from each of 1,000 cities: that is one trillion comparisons, or 11.6 days at one microsecond per comparison! However, we know that matching business mentions are probably city-specific, so comparing only mentions within the same city reduces the comparison space to a more manageable one billion comparisons, which takes only about 16 minutes at the same rate.
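
A minimal sketch of hash-based blocking on a single key, here the city from the example above (record fields are illustrative):

```python
from collections import defaultdict
from itertools import combinations

def block_by(records, key=lambda r: r["city"]):
    """Group records by a blocking key so comparisons stay within blocks."""
    blocks = defaultdict(list)
    for record in records:
        blocks[key(record)].append(record)
    return blocks

def candidate_pairs(blocks):
    """Yield only within-block pairs: 1,000 blocks of 1,000 records each is on the
    order of 10^9 comparisons instead of the ~10^12 a full cross product would need."""
    for block in blocks.values():
        yield from combinations(block, 2)
```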


Image 4: Optimizing Blocking Heuristics

Although some matches can be missed using blocking techniques, the key is to select a blocking criterion that minimizes the set of matching record pairs that fail to satisfy it, while maximizing the intersection of the two sets in the Venn diagram above. Common blocking algorithms include hash-based blocking, similarity- or neighborhood-based blocking, and complex blocking predicates built by combining simple ones. Another powerful technique is minHash (locality-sensitive hashing), which works with set-based distance measures, performs very well, and was recommended as the state of the art for blocking.
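
For a flavor of minHash/LSH blocking, here is a rough sketch that buckets records whose minhash signatures collide on at least one band; the hashing scheme and the band/row parameters are our own illustrative choices, not a tuned implementation:

```python
import hashlib
from collections import defaultdict

def minhash_signature(tokens, num_hashes=20):
    """One min-hash per seeded hash function over a (non-empty) token set."""
    return tuple(
        min(int(hashlib.md5(f"{seed}:{token}".encode()).hexdigest(), 16) for token in tokens)
        for seed in range(num_hashes)
    )

def lsh_candidate_pairs(records, tokenize, bands=5, rows=4):
    """Bucket records whose signatures agree on at least one band of `rows`
    consecutive hashes; records sharing a bucket become candidate pairs."""
    buckets = defaultdict(set)
    for i, record in enumerate(records):
        signature = minhash_signature(tokenize(record), num_hashes=bands * rows)
        for b in range(bands):
            band = signature[b * rows:(b + 1) * rows]
            buckets[(b, band)].add(i)
    return {(i, j) for bucket in buckets.values() if len(bucket) > 1
            for i in bucket for j in bucket if i < j}
```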

The final approach to blocking is more in line with the clustering methods discussed earlier. In canopy clustering, a distance metric is selected along with two thresholds. We pick a random mention, create a canopy of all mentions within the first (loose) threshold, and then remove from further consideration all mentions whose distance from the canopy center is less than the second (tight) threshold. We continue this process as long as the set of mentions M is not empty. This approach is interesting because mentions can belong to more than one canopy, which reduces the chance that the blocking step excludes an actual match.


Image 5: Using Canopies for Blocking
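
A minimal sketch of the canopy generation loop described above, with a generic distance function (the names are ours):

```python
def canopy_clustering(mentions, distance, t1, t2):
    """Canopy generation: T1 (loose) admits mentions into a canopy,
    T2 (tight, T2 < T1) removes them from further consideration.
    Mentions can land in more than one canopy."""
    assert t2 < t1
    remaining = set(mentions)
    canopies = []
    while remaining:
        center = remaining.pop()
        canopy = {center} | {m for m in remaining if distance(center, m) < t1}
        remaining -= {m for m in canopy if distance(center, m) < t2}
        canopies.append(canopy)
    return canopies
```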

Distributed ER: the MapReduce framework is very popular for large, parallel tasks, and there are several open source frameworks related to Hadoop that can be applied. MapReduce can be used with disjoint blocking: in the map phase (per-record computation) you compute each record's block and route it to a reducer, and in the reduce phase (global computation) you perform the pairwise matching. Several other challenges relating to MapReduce implementations were discussed in the tutorial and are worth reviewing before implementing ER in MapReduce.
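
As a rough sketch of the disjoint-blocking pattern, written as plain Python functions rather than against any particular Hadoop API (the city blocking key and the is_match predicate are assumptions):

```python
from itertools import combinations

def map_phase(record):
    """Per-record computation: emit (blocking key, record); here the key is the city."""
    yield record["city"], record

def reduce_phase(block_key, records, is_match):
    """Computation on the reducer for one block: pairwise matching within the block."""
    for r1, r2 in combinations(list(records), 2):
        if is_match(r1, r2):
            yield r1["id"], r2["id"]
```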

Conclusion

Entity resolution is becoming an increasingly important task as linked data grows and the requirement for graph-based reasoning extends beyond theoretical applications. With the advent of big data computations, this need has become even more pressing. Much research has been done in the area under separately named fields, and the tutorial by Dr. Getoor and Dr. Machanavajjhala succinctly highlights the current state of the art and gives a general sense of future work.

Specifically, there needs to be a unified approach to the theory, one that can give relational learning bounds. Since Entity Resolution is often part of bigger inference applications, there needs to be a joint approach with information extraction, and a characterization of how ER's successes (or errors) affect the quality of the larger reasoning task. Similarly, there is a need for large, real-world datasets with ground truth to establish performance benchmarks.

References

[Image 1]: Jentzsch, Anja. LOD Cloud Diagram as of September 2011. Digital image. Wikipedia. http://lod-cloud.net, 19 Sept. 2011. Web. 13 Aug. 2013.

[Image 2]: Bilgic, Mustafa, Louis Licamele, Lise Getoor, and Ben Shneiderman. "D-Dupe: An Interactive Tool for Entity Resolution in Social Networks."

[Image 5]: McCallum, Andrew, Kamal Nigam, and Lyle H. Ungar. "Efficient Clustering of High-Dimensional Data Sets with Application to Reference Matching." 2000.

Getoor, Lise and Ashwin Machanavajjhala. “Entity Resolution for Big Data.” 19th ACM SIGKDD Conference on Knowledge Discovery and Data Mining. Chicago: ACM SIGKDD, 2013.

This has been a summary of the work done by Dr. Lise Getoor and Dr. Ashwin Machanavajjhala for their tutorial entitled Entity Resolution for Big Data, accepted at KDD 2013 in Chicago, IL. Further references can be found in the tutorial presentation.

Special thanks to Lise and Ashwin for hosting Cobrain during their run-through of the presentation.


