Point Processes, structure2vec, and mo’

Extremely difficult, but definitely interesting reads below; the first relates to bioinformatics and drug design. I also included links to the corresponding repos, although I doubt I'll even get around to checking structure2vec out.

I'll finish reading the paper another time; but, wow, structure2vec, with its DE-MF (discriminative embedding mean field) and DE-LBP (discriminative embedding loopy belief propagation) variants, just screams significant improvement over traditional machine learning pipelines: feature spaces are learned through embedded latent variable models directly against discriminative information, which the paper explicates more than thoroughly. Where you at, AI class?

Discriminative Embeddings of Latent Variable Models for Structured Data (structure2vec):

Abstract: Kernel classifiers and regressors designed for structured data, such as sequences, trees and graphs, have significantly advanced a number of interdisciplinary areas such as computational biology and drug design. Typically, kernels are designed beforehand for a data type which either exploit statistics of the structures or make use of probabilistic generative models, and then a discriminative classifier is learned based on the kernels via convex optimization. However, such an elegant two-stage approach also limited kernel methods from scaling up to millions of data points, and exploiting discriminative information to learn feature representations.

We propose, structure2vec, an effective and scalable approach for structured data representation based on the idea of embedding latent variable models into feature spaces, and learning such feature spaces using discriminative information. Interestingly, structure2vec extracts features by performing a sequence of function mappings in a way similar to graphical model inference procedures, such as mean field and belief propagation. In applications involving millions of data points, we showed that structure2vec runs 2 times faster, produces models which are 10,000 times smaller, while at the same time achieving the state-of-the-art predictive performance.

https://arxiv.org/pdf/1603.05629.pdf
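
To make the "sequence of function mappings similar to mean field inference" concrete, here is a minimal NumPy sketch of a DE-MF-style embedding update. This is not the authors' implementation; the parameter names (`W1`, `W2`, `n_rounds`), the ReLU nonlinearity, and sum pooling are illustrative assumptions.

```python
import numpy as np

def s2v_mean_field(adj, X, W1, W2, n_rounds=4):
    """Iterate a mean-field-like fixed point over node embeddings:
    mu_i <- relu(W1 x_i + W2 * sum_{j in N(i)} mu_j),
    then pool the node embeddings into one graph-level feature vector."""
    n = adj.shape[0]
    mu = np.zeros((n, W1.shape[0]))           # node embeddings, init at zero
    for _ in range(n_rounds):
        agg = adj @ mu                         # sum of neighbor embeddings
        mu = np.maximum(0.0, X @ W1.T + agg @ W2.T)
    return mu.sum(axis=0)                      # graph-level feature (sum pooling)

# Toy example: a triangle graph, 2-dim node features, 4-dim embeddings.
rng = np.random.default_rng(0)
adj = np.array([[0, 1, 1], [1, 0, 1], [1, 1, 0]], dtype=float)
X = rng.normal(size=(3, 2))
W1 = rng.normal(size=(4, 2))
W2 = rng.normal(size=(4, 4)) * 0.1            # small weights keep iteration stable
phi = s2v_mean_field(adj, X, W1, W2)
print(phi.shape)  # (4,)
```

In the paper these parameters are trained end-to-end against the supervised target, which is what makes the embedding "discriminative" rather than a fixed hand-designed kernel.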


PtPack: The C++ Multivariate Temporal Point Process Package


Features

  • Learning sparse interdependency structure of terminating point processes with applications in continuous-time information diffusions.
  • Scalable continuous-time influence estimation and maximization.
  • Learning multivariate Hawkes processes with different structural constraints, like: sparse, low-rank, customized triggering kernels, etc.
  • Learning low-rank Hawkes processes for time-sensitive recommendations.
  • Efficient simulation of standard multivariate Hawkes processes.
  • Learning multivariate self-correcting processes.
  • Simulation of customized general temporal point processes.
  • Basic residual analysis and model checking of customized temporal point processes.
  • Visualization of triggering kernels, intensity functions, and simulated events.

Code example of the multivariate Hawkes process: https://github.com/dunan/MultiVariatePointProcess
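
For intuition on the "efficient simulation" feature, here is a hedged Python sketch of Ogata's thinning algorithm for a multivariate Hawkes process with exponential kernels. This is not PtPack's C++ code; `mu`, `alpha`, and `beta` are illustrative parameter names, and the quadratic recomputation of intensities is kept simple rather than efficient.

```python
import numpy as np

def simulate_hawkes(mu, alpha, beta, T, seed=0):
    """Ogata's thinning for intensities
    lambda_d(t) = mu[d] + sum_{t_i < t} alpha[d, d_i] * exp(-beta * (t - t_i)).
    Between events the intensity only decays, so its current value is a
    valid upper bound for the rejection step."""
    rng = np.random.default_rng(seed)
    D = len(mu)
    events = []                                # list of (time, dimension)
    t = 0.0
    while t < T:
        lam = np.array([mu[d] + sum(alpha[d, k] * np.exp(-beta * (t - s))
                                    for s, k in events) for d in range(D)])
        lam_bar = lam.sum()                    # upper bound on total intensity
        t += rng.exponential(1.0 / lam_bar)    # candidate next event time
        if t >= T:
            break
        lam_new = np.array([mu[d] + sum(alpha[d, k] * np.exp(-beta * (t - s))
                                        for s, k in events) for d in range(D)])
        if rng.uniform() <= lam_new.sum() / lam_bar:   # accept via thinning
            d = rng.choice(D, p=lam_new / lam_new.sum())
            events.append((t, d))
    return events

# Toy 2-dimensional process with mutual excitation (spectral radius < 1, so stable).
mu = np.array([0.2, 0.2])
alpha = np.array([[0.3, 0.1], [0.1, 0.3]])
ev = simulate_hawkes(mu, alpha, beta=1.0, T=50.0)
```

Each accepted event raises the intensity of every dimension it excites, which is how Hawkes models capture the self- and cross-excitation used in the diffusion and recommendation applications above.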

Definition: Hawkes Process (univariate and multivariate models exist)
http://mathworld.wolfram.com/HawkesProcess.html
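
The "residual analysis and model checking" feature in PtPack rests on the time-rescaling theorem: if the model is correct, the compensator increments between successive events are i.i.d. Exp(1). Below is a toy Python sketch under an exponential-kernel assumption, not PtPack's implementation; the closed-form compensator used here is specific to that kernel choice.

```python
import numpy as np

def hawkes_compensator(t, events, mu, alpha, beta):
    """Closed-form compensator Lambda(t) for a univariate Hawkes process with
    intensity lambda(t) = mu + alpha * sum_{t_i < t} exp(-beta * (t - t_i))."""
    past = [s for s in events if s < t]
    return mu * t + (alpha / beta) * sum(1.0 - np.exp(-beta * (t - s)) for s in past)

def rescaled_residuals(events, mu, alpha, beta):
    """Time-rescaling theorem: under the true model, the increments
    Lambda(t_i) - Lambda(t_{i-1}) are i.i.d. Exp(1), so their sample mean
    should sit near 1; large deviations flag model misfit."""
    lam = [hawkes_compensator(t, events, mu, alpha, beta) for t in events]
    return np.diff([0.0] + lam)

# Sanity check: with alpha = 0 the model reduces to a rate-mu Poisson process,
# so residuals for a rate-2 Poisson stream under mu = 2 should average near 1.
rng = np.random.default_rng(1)
times = np.cumsum(rng.exponential(1.0 / 2.0, size=500))
res = rescaled_residuals(list(times), mu=2.0, alpha=0.0, beta=1.0)
print(res.mean())  # should be close to 1
```

In practice one would also compare the empirical distribution of `res` against Exp(1) with a Q-Q plot or Kolmogorov-Smirnov test, which is the kind of check the PtPack feature list refers to.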

Awesome archive of quality, quality code: http://www.cc.gatech.edu/~lsong/code.html

Bioinformatics projects for one day… one day… http://cseweb.ucsd.edu/~eeskin/projects.html#discgraph


etc.: https://sourceforge.net/projects/ngnms/?source=typ_t4_highlighted
