Dec 20 2016
 

Today I got lucky twice and had two papers accepted to one of the best conferences in my field: WWW’17.

Here I would like to talk about THE one that I worked on the hardest during the last 6 months and that magically got accepted on the first try.

In the paper we present a detailed analysis of what distinguishes a successful online petition from a failed one. We study the effects of social media and front-page promotion on a petition’s performance, and which models are best suited to describe the evolution of signature counts over time.

Multidimensional time series have been the subject of intense research over the last decades. However, applying classical time-series techniques to online content is challenging, as web data tends to have data quality issues and is often incomplete, noisy, or poorly aligned. In this paper, we tackle the problem of predicting the evolution of a time series of user activity on the web in a manner that is both accurate and interpretable, using related time series to produce a more accurate prediction. We test our methods in the context of predicting signatures for online petitions, using data from thousands of petitions posted on The Petition Site – one of the largest platforms of its kind. We observe that the success of these petitions is driven by a number of external factors, including their promotion through social media channels and on the front page of the petitions platform. The interplay between these elements remains largely unexplored. The model we propose incorporates seasonality, aging effects, self-excitation, external shocks, and continuous effects. We were also careful to ensure that all model parameters have simple interpretations. We show through an extensive empirical evaluation that our model is significantly better at predicting the outcome of a petition than state-of-the-art techniques.
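To give a concrete flavour of how such ingredients can be combined, here is one plausible way to write the intensity (instantaneous signing rate) of a petition as a point process; this is an illustrative sketch, not the exact parameterisation from the paper:

$$\lambda(t) \;=\; s(t)\, a(t)\,\Big(\mu \;+\; \sum_{t_i < t} \kappa\, e^{-(t - t_i)/\theta} \;+\; \sum_{e} \beta_e\, \mathbf{1}[t > \tau_e]\, e^{-(t - \tau_e)/\theta_e}\Big),$$

where $s(t)$ is a periodic (circadian/weekly) seasonality term, $a(t)$ a decaying aging term, $\mu$ a baseline rate, the first sum captures self-excitation by past signatures at times $t_i$, and the second captures external shocks (tweets, front-page promotion) arriving at times $\tau_e$.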

In short, there are a few cool findings that are worth checking out:

  1. It seems that social media (middle panel) has a more prolonged impact on signature counts than self-excitation and the front-page effect. Moreover, prediction models are usually better at capturing signature decay than signature rise (left panel).
    (Figure: influence functions)
  2. Different kinds of petitions (successful, failed, front-page promoted) have different signature-gain dynamics, i.e., (1) failed petitions exhibit strong daily fluctuations while having the lowest intensity (1st, 2nd columns), (2) front-page promoted petitions reach their peak signature counts during the first hours of the promotion (around day 2-3), which is similar to the peak counts of failed petitions (3rd column), (3) most successful petitions not only acquire their signatures during the first few days but also show the longest aging of their popularity (4th column).
    (Figure: fitted parameters)
  3. Front-page promotion seems to have a strong effect on the speed at which signatures are acquired. However, we show that already being successful is not sufficient to get promoted; thus, the statement “already successful petitions are promoted on the front page” does not hold.
  4. People tweet about the petitions they sign, and their followers reciprocate by supporting those petitions. On the median, it takes about 15 minutes to tweet about a signature. In 26% of the cases involving individual human Twitter accounts, the petition is signed only after the user tweets about it, and about 30% of those cases happen after a retweet.
  5. Both petition signatures and tweets exhibit a strong circadian pattern.
    (Figure: circadian patterns)
  6. It is hard to distinguish prospective successful petitions from failed ones within the first 24 hours, since about 60% of the successful petitions have similar counterparts among the failed ones.
  7. We have collected data about a large number of petitions: their metadata, signatures, tweets and front-page rankings. The data is very rich and there is still a lot to discover 🙂

Overall, it was a great experience to do research with such an awesome team! The camera-ready version will be attached soon 🙂

Jul 16 2014
 

I am heading straight into my candidacy exam in a month. It is quite a challenging time, and therefore I wanted to start writing about one of the papers I chose for the exam – Elementary: Large-scale Knowledge-base Construction via Machine Learning and Statistical Inference.

Elementary is a Knowledge Base Construction (KBC) system that brings a wide variety of resources and techniques under one umbrella – statistical inference. The system employs Markov logic, a probabilistic graphical model, to combine rule-based approaches and ML techniques (such as Conditional Random Fields) and construct the KB with a certain level of uncertainty ;). On the other hand, it leverages dual decomposition to split the optimisation objective, trading off uncertainty against scalability. The overall goal is to minimise the total cost of the system, i.e., the cost of false positive and false negative inferences.

The system is claimed to be a universal framework for KBC; however, it is not clear how problems other than classification, segmentation and clustering could be solved. The main intuition behind this statement is that the only problems that can be solved are those that have a Markov Logic Network (MLN) representation. Apart from this, it is also unclear how further maintenance and updating are performed, as it is well known that KBs tend to evolve rapidly over time. Moreover, the system does not support a ‘cold’ start: first, entities have to be chosen manually, and second, relation extraction fully relies on distant supervision. Basically, concept extraction has to be handled either by user-provided functionality or by a user who is an expert in the field and knows the most interesting entities.

The high-level process of KBC is as follows:

  1. Entities and relations are chosen prior to execution.
  2. A corpus of documents is chosen.
  3. This is the feature extraction (FE) step. Basically, we fill two tables: entity mentions and relation mentions (not only the relations we want to infer but also some that will help to deduce the inferred ones).
  4. This step turns the collected evidence into the MLN representation, together with domain rules and ML components (CRF, logistic regression, clustering, etc.). Finally, rule weighting is performed.
  5. Now statistical inference over the weighted MLN can be run. As a result, we infer parts of the future KB.
  6. Iterate over steps 3 and 4 again if the results are not satisfactory.

And now some more details about the system and underlying approach.

As mentioned earlier, the system relies on the MLN and consists of three parts:

  • the schema is used to specify which data will be provided and generated, together with information on the instances that are to be inferred.
  • the evidence is a database representation of the schema, i.e., instances of the schema tables, or in other words, the presence of the schema in the corpora (extracted concepts, co-occurrences, relations, etc.)
  • the rules serve two purposes: constraining the inference (with domain knowledge, etc.) and guiding the inference process (conclusions drawn from the first-order logic representation of the rules). Rules are weighted by the level of certainty they guarantee (weights can be either learned or heuristically set).

Knowing what we would like to get from the system, many tasks can be mapped to one of the following problem classes: classification (logistic regression), segmentation (conditional random fields), and correlated clustering. These tasks can be split into smaller subtasks and recombined (with the subtasks solved independently) with a certain level of confidence. The recombination of subtasks is performed with dual decomposition (a generic formulation is sketched after the list below). This gives us flexibility in two ways:

  • split bigger tasks into smaller ones to reach a significant speed-up and better efficiency
  • different/multiple input sources can be treated by different processes and combined afterwards

Therefore, both conflicting input sources and scalability can be handled simultaneously.
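As a quick reminder of how dual decomposition works in general (this is the generic textbook formulation, not necessarily the exact one used in Elementary): to minimise a sum of objectives $f_1(x) + f_2(x)$ over shared variables, introduce local copies with a consistency constraint,

$$\min_{x_1, x_2} \; f_1(x_1) + f_2(x_2) \quad \text{s.t. } x_1 = x_2,$$

and relax the constraint with Lagrange multipliers $\lambda$:

$$\max_{\lambda} \; \Big[ \min_{x_1} \big( f_1(x_1) + \lambda^\top x_1 \big) \;+\; \min_{x_2} \big( f_2(x_2) - \lambda^\top x_2 \big) \Big].$$

The two inner minimisations are now independent subproblems (hence the speed-up and the ability to mix heterogeneous sources), and $\lambda$ is updated with subgradient steps until the copies agree.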

Finally, rules, constraints and execution paths with different inputs are represented as an MLN. To be taken into account, each component has to be represented in first-order logic, and any adjustment to the rules is immediately reflected in further processing, i.e., the optimal path to an inferred variable ensures that each rule is satisfied with a certain level of confidence (its weight). Statistical inference is applied on top of the MLN representation to guarantee the most probable assignment for a variable. First, the rules can be scoped, i.e., the number of groundings can be minimised by manually adding a scoping rule to reduce the search space. For example, you can specify that there are only N-1 adjacent pairs for the query Adjacent(element, element). Second, the weight of a rule might be set proportional to some of its parameters, e.g., making a rule more valuable if it contains a mention of the word “Obama”.

Evidence is the presence of any of the schema items in the corpora. In other words, if we have a co-occurrence predicate in the schema, we would like to obtain co-occurrence statistics for concepts/instances. Each document is treated as a set of possibly overlapping text spans. Based on this notion, three tables are populated (the evidence; a small sketch of these tables follows the list):

  • an entity-mention table M_E for each entity E, e.g., PERSON(Julia, TextSpans(doc_i, positions)). This table is filled for the entities chosen by the user
  • a relationship-mention table R_E for each relation (with the arity of the relation representation, e.g., Mention(a, b) has arity 2)
  • a relationship table, which can be obtained from the two previous tables.
  • Finally, each text span is characterised by a feature vector F_m(T[E], V_m), where the V_m are the features and T[E] is the list of text spans [entities].
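To make the evidence layer a bit more tangible, here is a toy sketch of what these tables could look like as plain Python data structures; the field names and the EPFL/WorksAt example are made up for illustration and are not Elementary’s actual schema:

```python
from collections import namedtuple

TextSpan = namedtuple("TextSpan", ["doc_id", "start", "end"])

# Entity-mention table M_E: one row per mention of an entity in the corpus.
entity_mentions = [
    {"entity": "PERSON:Julia", "span": TextSpan("doc_1", 10, 15)},
    {"entity": "ORG:EPFL",     "span": TextSpan("doc_1", 42, 46)},
]

# Relationship-mention table: one row per piece of textual evidence for a
# relation (arity 2 here: the relation connects two entity mentions).
relation_mentions = [
    {"relation": "WorksAt", "args": ("PERSON:Julia", "ORG:EPFL"),
     "span": TextSpan("doc_1", 10, 46)},
]

# Relationship table: aggregated from the two tables above.
relations = {("PERSON:Julia", "ORG:EPFL"): "WorksAt"}

# Feature vector F_m for a text span: here just a bag of string features V_m.
features = {TextSpan("doc_1", 10, 46): ["contains_verb_works", "same_sentence"]}
```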

Apparently, the need for features imposes a requirement for annotated data, which can be filled in with distant-supervision patterns.

The system basically tackles three main subproblems: entity linking, relation extraction on top of it, and domain knowledge incorporation. The last subproblem is mostly manually curated, unlike the first two, due to its nature.

Entity extraction example: canonical names, string matching, regular expressions and NER (named entity recognition) can be used as sources of features for the MLN. After extracting the features, rules for entity linking can be defined. Constraints can be set as well, e.g., if a mention is an entity, then it should have been detected by the NER, etc.

Relation extraction can be simplified to the following: commonly, classification on linguistic patterns is used. Among the ingredients are pattern similarity, frequency-based filtering and l1-norm regularisation [1]. Multiple approaches can be combined; however, afterwards the rules for the inference have to be specified, just as in the entity extraction example. (A minimal sketch of such a pattern classifier is shown below.)
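As an illustration of this classification view, here is a minimal sketch using an l1-regularised logistic regression over bag-of-n-gram pattern features; the patterns and labels are made up, and this is not the StatSnowball/Elementary pipeline itself:

```python
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

# Each example: the lexical pattern between two candidate entity mentions,
# labelled with whether the target relation (say, WorksAt) holds.
patterns = [
    "X , a researcher at Y",
    "X works at Y",
    "X visited Y last summer",
    "X was born in Y",
]
labels = [1, 1, 0, 0]  # 1 = relation holds, 0 = it does not

clf = make_pipeline(
    CountVectorizer(ngram_range=(1, 2)),                          # word uni/bigrams
    LogisticRegression(penalty="l1", solver="liblinear", C=1.0),  # l1 feature selection
)
clf.fit(patterns, labels)

print(clf.predict(["X is employed at Y"]))  # predicted label for a new pattern
```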

To make it clearer, let’s consider an example. We have a set of rules where x is a key phrase and y is a label for the sentence. We would like to infer that if “Ronaldo” appears in a sentence, the other key phrases in that sentence are also sport-related:

  1. Mention(x, y) and x == “Ronaldo” => SportMention(x, y)                    | w = 4
  2. SportMention(x, y) and Mention(x1, y) and x1 != x => SportMention(x1, y)  | w = 3
  3. Mention(x_i, y) and each x_i != “Ronaldo” => not SportMention(x1, y)      | w = 10

Let’s say we are interested in inferring SportMention given a set of evidence, i.e., we would like to deduce the relation SportMention between x and y given a set of rules/constraints and a set of evidence.

The set of rules can be represented as an MLN with activations proportional to the weights of the rules (by the way, rule weights can be either learned or heuristically set – here I did it heuristically, proportionally to the level of belief I have in each rule :)). Basically, given an annotated corpus, we would like to compute the misses (how often the weighted rules lead to wrong inferences).


At this stage, we can simply pose the following problem: which inferred node has the most weight, and which combination of rules minimises the overall error on some training examples, given the dependencies and constraints. (A crude toy sketch of such weighted-rule scoring is below.)
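Here is a crude toy approximation of that weighted-rule scoring for the SportMention example; real MLN inference computes (approximate) marginals over all groundings, e.g., with MC-SAT, whereas this sketch just sums the weights of the rules that fire for and against a single conclusion:

```python
def sport_mention_score(keyphrases, target):
    """Score the claim SportMention(target, y) for one sentence's key phrases."""
    w_for, w_against = 0.0, 0.0

    # Rule 1 (w = 4): a literal "Ronaldo" mention is a sport mention.
    if target == "Ronaldo" and "Ronaldo" in keyphrases:
        w_for += 4

    # Rule 2 (w = 3): co-occurring with an established sport mention.
    if "Ronaldo" in keyphrases and target in keyphrases and target != "Ronaldo":
        w_for += 3

    # Rule 3 (w = 10): no "Ronaldo" in the sentence counts against.
    if "Ronaldo" not in keyphrases:
        w_against += 10

    return w_for - w_against  # positive => infer SportMention(target, y)

print(sport_mention_score({"Ronaldo", "hat-trick"}, "hat-trick"))  # 3
print(sport_mention_score({"election", "senate"}, "senate"))       # -10
```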

Coming back to the core subproblems of the system: classification, segmentation and correlated clustering. These are the approaches that the KBC problems are mapped to, and they enable decomposition of the problem, so that KBC can be solved in parallel.

[1] Zhu, J., Nie, Z., Liu, X., Zhang, B., & Wen, J. (2009). Statsnowball: A statistical approach to extracting entity relationships. In Proceedings of the International Conference on World Wide Web (pp. 101–110).

Jun 21 2014
 

I haven’t posted for a while; I hope to change that these days:)

I am continuing to work on an amazing project about scientific knowledge conceptualisation. The first step on the way to reaching nirvana with my project is to create a methodology for ontology creation (aka the __repr__ of the knowledge base). Talking about a knowledge base and its construction inevitably leads to the problem of disambiguation and the necessity of removing duplicates from the knowledge base.

Obviously, the construction of an ontology assumes as many different sources as possible as input, e.g., different classifications, glossaries, concepts from various papers. All those sources might produce equivalent concepts but represent/name them differently.

Here, I would like to talk about the simplest approach to resolving such ambiguities, misspellings and the various ways of writing the same thing – syntactic resolution. Conflict resolution for multiple sources of information on the syntactic level boils down to the following: detect whether two concepts are the same based on their syntactic structure.

To tackle this problem, I’ve developed a tool for duplicate deletion across multiple sources, with human curation of uncertain pairs of concepts. The tool works as follows:

  • Create an index over all possible words appearing in the concepts (with the following components: the name, its stemmed form, a form without special symbols), i.e., an inverted index but with pointers to the terms in which each word appears.
  • A list of possible term candidates is created for every suspicious pair.
  • Each pair is scored according to the following criteria (a minimal sketch of this scoring follows the list):
    • Number of equal words (lemmatised)
    • Order of the words, i.e., how much the positions of the words differ
    • Absence or presence of an extra first or last word, as it might change the meaning a lot, e.g., Weather vs. Weather Model, Learning vs. Adaptive Learning.
    • One thing that is still missing is a synonym check.
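Here is a minimal sketch of such pair scoring; the weights and the naive lemmatisation stand-in are made up for illustration (the real tool uses a proper lemmatiser and tuned thresholds):

```python
def naive_lemma(word):
    """Very rough lemmatiser stand-in: lowercase and strip a plural 's'."""
    w = word.lower()
    return w[:-1] if w.endswith("s") and len(w) > 3 else w

def pair_score(term_a, term_b):
    """Score how likely two concept names are duplicates (higher = closer)."""
    a = [naive_lemma(w) for w in term_a.split()]
    b = [naive_lemma(w) for w in term_b.split()]
    shared = set(a) & set(b)
    if not shared:
        return 0.0

    # Criterion 1: fraction of shared (lemmatised) words.
    overlap = len(shared) / max(len(a), len(b))

    # Criterion 2: penalise differences in word positions.
    order_penalty = sum(abs(a.index(w) - b.index(w)) for w in shared)

    # Criterion 3: an extra first or last word can change the meaning a lot
    # (e.g., "Learning" vs "Adaptive Learning"), so penalise it explicitly.
    boundary_penalty = int(a[0] != b[0]) + int(a[-1] != b[-1])

    return overlap - 0.1 * order_penalty - 0.2 * boundary_penalty

print(pair_score("Weather Models", "Weather Model"))  # 1.0: likely duplicates
print(pair_score("Learning", "Adaptive Learning"))    # 0.2: needs human curation
```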

The resulting set of terms is then searched for in the article corpora. Here, the whole article text is normalised (lemmatised, simplified (without stop words), etc.), as are the terms in the initial seed. Each term occurrence in an article is detected and accumulated in a map, e.g., {norm_term: [(doc, start_pos, end_pos, context), …]}

Once the occurrences are retrieved, the most related pairs are detected, simply by computing a weighted co-occurrence for each pair:

  • Each pair (a, b) can acquire a weight of at most 1 per article.
  • If a pair (a, b) appears within one sentence, the weight is increased by 1; otherwise it is increased by a weight proportional to the distance between the terms in the article.
  • Finally, each weight is computed as follows:
    • normalised by min(count(a), count(b)) if the accumulated weight is smaller than this min,
    • or else set to 1.

Here is a basic representation of the graph that appears after applying the procedure above.

(Figure: co-occurrence graph example)

Out of the pairs generated from the articles, we select those closest to each other (i.e., pairs with the maximal score: sum(weights) / #articles in which the pair occurs) and proceed further with them. (A small sketch of this weighting is below.)
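Here is a small sketch of this weighting; the occurrence-map format follows the post ({norm_term: [(doc, start, end, context), …]}), while the “same sentence” test is crudely approximated by a character-distance window and the distance decay is an illustrative choice rather than the exact formula used in the tool:

```python
from collections import defaultdict
from itertools import combinations

def pair_scores(occurrences, sent_window=120, decay=100.0):
    """occurrences: {term: [(doc, start, end, context), ...]} -> {(a, b): score}"""
    counts = {t: len(occ) for t, occ in occurrences.items()}
    per_article = defaultdict(lambda: defaultdict(float))

    for a, b in combinations(sorted(occurrences), 2):
        for doc_a, start_a, _, _ in occurrences[a]:
            for doc_b, start_b, _, _ in occurrences[b]:
                if doc_a != doc_b:
                    continue  # only count co-occurrences within one article
                dist = abs(start_a - start_b)
                if dist <= sent_window:           # ~ same sentence: +1
                    per_article[(a, b)][doc_a] += 1.0
                else:                             # otherwise decay with distance
                    per_article[(a, b)][doc_a] += decay / (decay + dist)

    scores = {}
    for (a, b), docs in per_article.items():
        cap = min(counts[a], counts[b])
        # per-article weight: normalise by min(count(a), count(b)), else 1
        weights = [w / cap if w < cap else 1.0 for w in docs.values()]
        scores[(a, b)] = sum(weights) / len(docs)  # sum(weights) / #articles
    return scores

occ = {
    "climate change": [("d1", 10, 24, "..."), ("d2", 5, 19, "...")],
    "storm":          [("d1", 40, 45, "..."), ("d2", 400, 405, "...")],
}
print(pair_scores(occ))
```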

The next problem, once the initial seed is filtered, terms are extracted from the text and pairs are determined, is to leverage the syntactic structure of a sentence, i.e., its parse tree, to extract possible extensions to the seed ontology and detect possible (subj -> predicate -> obj) triplets, where subj and obj are noun phrases that are possibly present in the ontology, and predicate is a possible relation between subj and obj. Here we solve three problems: detect a predicate between suggested pairs of terms, expand terms (e.g., human -> human health), and detect a predicate between a given term and other possible terms that are not in our seed.

The procedure is as follows and is shown in the figure below (a toy sketch of the triplet extraction follows the figure):

  • Split the article into sentences.
  • Parse the sentences with a (dependency) parser.
  • Identify concepts (either from a predefined list or extracted with a state-of-the-art NER).
  • Look for the Np-Vp-Np syntactic structure (the set of patterns can be extended, as is done in [1]).
  • Finally, compare the predicate (Vp) with the verb ontology and produce triplets and possible concept expansions (i.e., a bigger noun phrase for the extracted concept, e.g., climate -> climate change).
  • Each extracted triplet is aggregated over the whole corpus, and the triplet with the most evidence is suggested to the user for confirmation.

(Figure: relation extraction pipeline)
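For the triplet extraction step, here is a toy sketch based on a dependency parse; the post does not name a specific parser, so spaCy is used here purely for illustration (requires `pip install spacy` and `python -m spacy download en_core_web_sm`):

```python
import spacy

nlp = spacy.load("en_core_web_sm")

def extract_triplets(text):
    """Yield (subject, predicate, object) triplets with expanded noun phrases."""
    doc = nlp(text)
    # Map each token to the noun chunk containing it, so concepts get expanded
    # (e.g., "climate" -> "climate change").
    chunk_of = {tok.i: chunk.text for chunk in doc.noun_chunks for tok in chunk}

    for token in doc:
        if token.pos_ != "VERB":
            continue
        subjects = [t for t in token.lefts if t.dep_ in ("nsubj", "nsubjpass")]
        objects = [t for t in token.rights if t.dep_ in ("dobj", "obj", "attr")]
        for s in subjects:
            for o in objects:
                yield (chunk_of.get(s.i, s.text), token.lemma_, chunk_of.get(o.i, o.text))

for triplet in extract_triplets("The severe storm causes rapid climate change."):
    print(triplet)  # e.g., ('The severe storm', 'cause', 'rapid climate change')
```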

Together with relation extraction between pairs of concepts, we can take a single concept and determine the possible relations it might be involved in, as shown in the example below.

(Figures: examples)

Concepts drawn in pink are the predefined ones. Green concepts are those determined by searching for relations to a “pink” concept, e.g., having the concept storm, we examined the syntactic structure and found the predicate causes and the object/concept climate change, and vice versa.

Overall, motivated by recent knowledge base construction work [2, 3, 4], I’ve looked into how to build knowledge bases (this is only the basics). Given the volumes of data to be processed, a MapReduce-like approach should clearly be established. In addition to scalability, incremental updates to the constantly evolving system should be built in.

Overall, relation extraction and concept expansion showed good performance, with ~83% precision for both tasks and 40% recall for the relation extraction task.

[1] K. Fundel, R. Küffner, and R. Zimmer, “RelEx—Relation extraction using dependency parse trees,” Bioinformatics, vol. 23, no. 3, pp. 365–371, Feb. 2007.
[2] F. Niu, C. Zhang, C. Re, and J. W. Shavlik, “DeepDive: Web-scale Knowledge-base Construction using Statistical Learning and Inference.,” in VLDS, 2012, vol. 884, pp. 25–28.
[3] O. Deshpande, D. S. Lamba, M. Tourn, S. Das, S. Subramaniam, A. Rajaraman, V. Harinarayan, and A. Doan, “Building, Maintaining, and Using Knowledge Bases: A Report from the Trenches,” in Proceedings of the 2013 ACM SIGMOD International Conference on Management of Data, New York, NY, USA, 2013, pp. 1209–1220.
[4] F. Niu, C. Zhang, C. Ré, and J. Shavlik, “Elementary: Large-scale Knowledge-base Construction via Machine Learning and Statistical Inference.”

 

Feb 22 2014
 

Recently, I read a paper about building, maintaining and using knowledge bases. I feel that this paper will significantly influence my future research; therefore, I would like to discuss my point of view and thoughts about it.

To begin with, an ontology, as a part of information science, is a formal representation of knowledge. Having such a representation may benefit multiple fields and applications, e.g., search, query understanding, recommendations, advertising, social mining, etc. However, there has been little or no documentation on the whole life cycle of an ontology, including building, maintaining, and using it.

In the paper, the authors try to answer numerous questions, e.g., what are the pitfalls of maintaining a large knowledge base (KB), what is the influence of the user on the system, how should continuous updates and integration be handled, etc. The following are among the most notable design decisions:

  • Construction of a global ontology-like KB based on Wikipedia, i.e., the KB attempts to cover the entire world, capturing all important concepts, instances and relations. The approach chosen for constructing a global KB has obvious disadvantages for domain-specific construction. In the case of building a domain-specific ontology, e.g. for computer science (CS), it is unclear how to efficiently limit the Wikipedia mining process to capture only CS-related topics.
  • Enrichment of the KB with additional sources, i.e., the requirement to enlarge the set of instances of the KB has paved the way for involving additional sources with more specific information. Combining several sources of knowledge always leads to the necessity of aligning/merging, a process that requires information about the context and human intervention. On the other hand, only a limited number of resources were processed to enlarge the KB. In my opinion, mining scientific articles in various domains would enrich the KB even further and give the necessary depth of human knowledge in state-of-the-art domains.
  • Relationship extraction from Wikipedia, i.e., Wikipedia pages that are connected to the KB concepts are analysed extensively with well-known natural language processing techniques to get free-form relations for concept pairs. By free-form it is meant that there is no predefined set of possible relations in the KB. This gives a certain freedom but might limit search, as the number of such relations may grow without bound.
  • KB updates are performed by rerunning the KB construction pipeline from scratch. Clearly, this imposes several disadvantages:
    • Rerunning the whole KB construction takes substantial time.
    • As the KB is curated by a human (analyst), it is not clear how and to what extent such curation should be reused after a rerun. Moreover, the question is how to carry over preceding analyst interventions to the newly generated KB. In particular, an analyst might change edge weights; however, if an edge is not present in the newly constructed wiki DAG (or a node is renamed), the change won’t be utilised.
    • Regardless of the construction process, an incremental update of the KB seems more sensible in terms of speed and of facilitating the analysts’ work. Additionally, it is unclear how a single person can curate a KB of the entire world. In my opinion, the aforementioned curation burden should be spread over multiple people, e.g., crowdsourced.
    • It is unclear how conflicts are resolved when combining different sources of information, e.g., controversial relations, etc.

The aforementioned system design imposes some limitations. First, relation types are not managed and may lack the expressiveness that some applications require, e.g., explainedBy, modelIn, methodsIn, importantIn, etc. This information can be extracted not only from Wiki pages, infoboxes and templates, but also from the templates at the bottom of a page. Second, the DAG construction from the cyclic graph extracted from Wikipedia requires additional verification. The model constructed for weight dissemination in the cyclic graph includes only three parameters, i.e., co-occurrences of the terms on the Web and in Wiki lists, and name similarity. Third, the KB model might benefit from analysing scientific papers and integrating this information into the KB. Finally, the system might benefit from data curation by means of crowdsourcing, to improve accuracy and facilitate user contributions.