{"id":185,"date":"2017-02-13T18:04:12","date_gmt":"2017-02-13T18:04:12","guid":{"rendered":"http:\/\/sag.art.uniroma2.it\/kelp_wordpress\/?page_id=185"},"modified":"2017-04-05T15:38:10","modified_gmt":"2017-04-05T15:38:10","slug":"kernel-functions-2","status":"publish","type":"page","link":"http:\/\/www.kelp-ml.org\/?page_id=185","title":{"rendered":"Kernel functions"},"content":{"rendered":"<p>Kernel methods (see for example (Shawe-Taylor and Cristianini, 2004)) are a powerful class of algorithms for pattern analysis that, exploiting the so-called kernel functions, can operate in an implicit high-dimensional feature space without explicitly computing the coordinates of the data in that space. Most existing machine learning platforms provide kernel methods that operate only on vectorial data. In contrast, KeLP\u00a0has the fundamental advantage of imposing no restriction to a specific data structure: kernels operating directly on vectors, sequences, trees, graphs, or other structured data can be defined.<\/p>\n<p>Another appealing characteristic of KeLP\u00a0is that complex kernels can be created by composing and combining simpler ones. 
This makes it possible to create richer similarity metrics in which different information from different <a href=\"http:\/\/www.kelp-ml.org\/kelp-javadoc\/current-version\/it\/uniroma2\/sag\/kelp\/data\/representation\/Representation.html\">Representation<\/a>s can be exploited simultaneously.<\/p>\n<p>As shown in the figure below, KeLP fully supports the composition and combination mechanisms by providing three abstractions of the <a href=\"http:\/\/www.kelp-ml.org\/kelp-javadoc\/current-version\/it\/uniroma2\/sag\/kelp\/kernel\/Kernel.html\">Kernel<\/a> class:<\/p>\n<ul>\n<li><a href=\"http:\/\/www.kelp-ml.org\/kelp-javadoc\/current-version\/it\/uniroma2\/sag\/kelp\/kernel\/DirectKernel.html\">DirectKernel<\/a>: it computes the kernel similarity by operating directly on a specific\u00a0<a href=\"http:\/\/www.kelp-ml.org\/kelp-javadoc\/current-version\/it\/uniroma2\/sag\/kelp\/data\/representation\/Representation.html\">Representation<\/a>, which it automatically extracts from the <a href=\"http:\/\/www.kelp-ml.org\/kelp-javadoc\/current-version\/it\/uniroma2\/sag\/kelp\/data\/example\/Example.html\">Example<\/a>s to be compared. For instance,\u00a0KeLP implements <a href=\"http:\/\/www.kelp-ml.org\/kelp-javadoc\/current-version\/it\/uniroma2\/sag\/kelp\/kernel\/vector\/LinearKernel.html\">LinearKernel<\/a>\u00a0on Vector representations, as well as <a href=\"http:\/\/www.kelp-ml.org\/kelp-javadoc\/current-version\/it\/uniroma2\/sag\/kelp\/kernel\/tree\/SubTreeKernel.html\">SubTreeKernel<\/a>\u00a0(Vishwanathan and Smola, 2002) (the SubSet Tree Kernel, also known as the Syntactic Kernel) and <a href=\"http:\/\/www.kelp-ml.org\/kelp-javadoc\/current-version\/it\/uniroma2\/sag\/kelp\/kernel\/tree\/PartialTreeKernel.html\">PartialTreeKernel<\/a>\u00a0(Moschitti, 2006) on\u00a0<a href=\"http:\/\/www.kelp-ml.org\/kelp-javadoc\/current-version\/it\/uniroma2\/sag\/kelp\/data\/representation\/tree\/TreeRepresentation.html\">TreeRepresentation<\/a>s.<\/li>\n<li><a href=\"http:\/\/www.kelp-ml.org\/kelp-javadoc\/current-version\/it\/uniroma2\/sag\/kelp\/kernel\/KernelComposition.html\">KernelComposition<\/a>: it enriches the kernel similarity provided by any other kernel. Some KeLP implementations are\u00a0<a href=\"http:\/\/www.kelp-ml.org\/kelp-javadoc\/current-version\/it\/uniroma2\/sag\/kelp\/kernel\/standard\/PolynomialKernel.html\">PolynomialKernel<\/a>, <a href=\"http:\/\/www.kelp-ml.org\/kelp-javadoc\/current-version\/it\/uniroma2\/sag\/kelp\/kernel\/standard\/RbfKernel.html\">RbfKernel<\/a> and\u00a0<a href=\"http:\/\/www.kelp-ml.org\/kelp-javadoc\/current-version\/it\/uniroma2\/sag\/kelp\/kernel\/standard\/NormalizationKernel.html\">NormalizationKernel<\/a>.<\/li>\n<li><a href=\"http:\/\/www.kelp-ml.org\/kelp-javadoc\/current-version\/it\/uniroma2\/sag\/kelp\/kernel\/KernelCombination.html\">KernelCombination<\/a>: it combines different kernels through a specific function. 
Some KeLP implementations are\u00a0<a href=\"http:\/\/www.kelp-ml.org\/kelp-javadoc\/current-version\/it\/uniroma2\/sag\/kelp\/kernel\/standard\/LinearKernelCombination.html\">LinearKernelCombination<\/a> and\u00a0<a href=\"http:\/\/www.kelp-ml.org\/kelp-javadoc\/current-version\/it\/uniroma2\/sag\/kelp\/kernel\/standard\/KernelMultiplication.html\">KernelMultiplication<\/a>.<\/li>\n<\/ul>\n<p>Finally, the class <a href=\"http:\/\/www.kelp-ml.org\/kelp-javadoc\/current-version\/it\/uniroma2\/sag\/kelp\/kernel\/pairs\/KernelOnPairs.html\">KernelOnPairs<\/a> models kernels operating on <a href=\"http:\/\/www.kelp-ml.org\/kelp-javadoc\/current-version\/it\/uniroma2\/sag\/kelp\/data\/example\/ExamplePair.html\">ExamplePairs<\/a>, which are pairs of objects, e.g., pairs of texts as in (Filice et al., 2015). It can also be used to design a ranking algorithm based on the preference kernel (Shen and Joshi, 2003), using the implementation\u00a0<a href=\"http:\/\/www.kelp-ml.org\/kelp-javadoc\/current-version\/it\/uniroma2\/sag\/kelp\/kernel\/pairs\/PreferenceKernel.html\">PreferenceKernel<\/a>.<\/p>\n<div style=\"width: 1858px\" class=\"wp-caption aligncenter\"><img decoding=\"async\" loading=\"lazy\" src=\"http:\/\/www.kelp-ml.org\/wp-content\/uploads\/2017\/02\/kernel.png\" width=\"1848\" height=\"1556\" \/><p class=\"wp-caption-text\">A simplified class diagram of the kernel taxonomy in KeLP.<\/p><\/div>\n<hr \/>\n<h1><span style=\"color: #000000;\">Existing Kernels<\/span><\/h1>\n<ul>\n<li><a href=\"http:\/\/www.kelp-ml.org\/?page_id=719\">Kernels on Vectors<\/a>:\n<ul>\n<li>Linear Kernel<\/li>\n<\/ul>\n<\/li>\n<li><a href=\"http:\/\/www.kelp-ml.org\/?page_id=727\">Kernels on Sequences<\/a>:\n<ul>\n<li>Sequence Kernel<\/li>\n<\/ul>\n<\/li>\n<li><a href=\"http:\/\/www.kelp-ml.org\/?page_id=728\">Kernels on Trees<\/a>:\n<ul>\n<li>Subtree Kernel<\/li>\n<li>SubSet Tree Kernel<\/li>\n<li>Partial Tree Kernel<\/li>\n<li>Smoothed Partial Tree Kernel<\/li>\n<li>Compositionally 
Smoothed Partial Tree Kernel<\/li>\n<\/ul>\n<\/li>\n<li><a href=\"http:\/\/www.kelp-ml.org\/?page_id=729\">Kernels on Graphs<\/a>:\n<ul>\n<li>Shortest Path Kernel<\/li>\n<li>Weisfeiler-Lehman Subtree\u00a0Kernel<\/li>\n<\/ul>\n<\/li>\n<li><a href=\"http:\/\/www.kelp-ml.org\/?page_id=732\">Kernel Compositions<\/a>:\n<ul>\n<li>Polynomial Kernel<\/li>\n<li>Radial Basis Function\u00a0Kernel<\/li>\n<li>Normalization Kernel<\/li>\n<\/ul>\n<\/li>\n<li><a href=\"http:\/\/www.kelp-ml.org\/?page_id=733\">Kernel Combinations<\/a>:\n<ul>\n<li>Linear Kernel Combination<\/li>\n<li>Kernel Multiplication<\/li>\n<\/ul>\n<\/li>\n<li><a href=\"http:\/\/www.kelp-ml.org\/?page_id=730\">Kernels on Pairs<\/a>:\n<ul>\n<li>Preference Kernel<\/li>\n<li>Uncrossed Pairwise Sum Kernel<\/li>\n<li>Uncrossed Pairwise Product Kernel<\/li>\n<li>Pairwise Sum Kernel<\/li>\n<li>Pairwise Product Kernel<\/li>\n<li>Best Pairwise Alignment Kernel<\/li>\n<\/ul>\n<\/li>\n<\/ul>\n<hr \/>\n<h3>References<\/h3>\n<p>Simone Filice, Giovanni Da San Martino and Alessandro Moschitti. <em>Relational Information for Learning from Structured Text Pairs.<\/em> In Proceedings of the 53<sup>rd<\/sup> Annual Meeting of the Association for Computational Linguistics, ACL 2015.<\/p>\n<p>Alessandro Moschitti.\u00a0<em>Efficient convolution kernels for dependency and constituent syntactic trees<\/em>. In Proceedings of ECML, Berlin, Germany, September 2006.<\/p>\n<p>Libin Shen and Aravind K. Joshi.\u00a0<em>An SVM-based voting algorithm with application to parse reranking<\/em>. In Proceedings of CoNLL 2003, pages 9\u201316, 2003.<\/p>\n<p>John Shawe-Taylor and Nello Cristianini.\u00a0<em>Kernel Methods for Pattern Analysis<\/em>. Cambridge University Press, New York, NY, USA, 2004. ISBN 0521813972.<\/p>\n<p>S.V.N. Vishwanathan and A.J. Smola. <em>Fast kernels on strings and trees<\/em>. 
In Proceedings of Neural Information Processing Systems, 2002.<\/p>\n","protected":false},"excerpt":{"rendered":"<p>Kernel methods (see for example (Shawe-Taylor and Cristianini, 2004)) are a powerful class of algorithms for pattern analysis that, exploiting the so called kernel functions, can operate in an implicit high-dimensional feature space without explicitly computing the coordinates of the data in that space. Most of the existing machine learning platforms provide kernel methods that <a href=\"http:\/\/www.kelp-ml.org\/?page_id=185\" rel=\"nofollow\"><span class=\"sr-only\">Read more about Kernel functions<\/span>[&hellip;]<\/a><\/p>\n","protected":false},"author":1,"featured_media":0,"parent":112,"menu_order":10,"comment_status":"closed","ping_status":"closed","template":"","meta":[],"_links":{"self":[{"href":"http:\/\/www.kelp-ml.org\/index.php?rest_route=\/wp\/v2\/pages\/185"}],"collection":[{"href":"http:\/\/www.kelp-ml.org\/index.php?rest_route=\/wp\/v2\/pages"}],"about":[{"href":"http:\/\/www.kelp-ml.org\/index.php?rest_route=\/wp\/v2\/types\/page"}],"author":[{"embeddable":true,"href":"http:\/\/www.kelp-ml.org\/index.php?rest_route=\/wp\/v2\/users\/1"}],"replies":[{"embeddable":true,"href":"http:\/\/www.kelp-ml.org\/index.php?rest_route=%2Fwp%2Fv2%2Fcomments&post=185"}],"version-history":[{"count":16,"href":"http:\/\/www.kelp-ml.org\/index.php?rest_route=\/wp\/v2\/pages\/185\/revisions"}],"predecessor-version":[{"id":1009,"href":"http:\/\/www.kelp-ml.org\/index.php?rest_route=\/wp\/v2\/pages\/185\/revisions\/1009"}],"up":[{"embeddable":true,"href":"http:\/\/www.kelp-ml.org\/index.php?rest_route=\/wp\/v2\/pages\/112"}],"wp:attachment":[{"href":"http:\/\/www.kelp-ml.org\/index.php?rest_route=%2Fwp%2Fv2%2Fmedia&parent=185"}],"curies":[{"name":"wp","href":"https:\/\/api.w.org\/{rel}","templated":true}]}}