SOME USEFUL MACHINE LEARNING LIBRARIES.
Published: 2019-06-14


from: http://www.erogol.com/broad-view-machine-learning-libraries/

http://www.slideshare.net/VincenzoLomonaco/deep-learning-libraries-and-rst-experiments-with-theano

 

With the advent of so many different and intricate machine learning algorithms, it is very hard to write your own code for every problem. Choosing a library is therefore an essential step before you start a project. However, the many available libraries have different quirks and trade-offs and are written in different languages (sometimes several at once), so the choice is not as straightforward as it seems.

Before you start, I strongly recommend experimenting with the library you are interested in, so that you are not unpleasantly surprised at the end. To keep this a simple guide, I will point out some possible libraries and flag my own choices, with the reasons behind them.

My simple bundle for small projects ----

I generally use Python for my problems. Here are my most frequently used libraries.

  • Scikit-learn - A very broad and well-established library. It offers different functionalities to meet the requirements of your workflow. Unless you need some peculiar algorithm, Scikit-learn alone is enough. It is built on Numpy and Scipy in Python, and it also provides a very easy way to parallelize your code.
  • Pandas - More than a machine learning library, Pandas is a "Data Analysis Library". It gives you very handy features for inspecting your data before you design your workflow. It supports both in-memory and on-disk operations, so it is especially useful when your data is too large to be handled with simple methods or to fit into memory as a whole.
  • Theano - Yet another Python library, but one of a kind. Simply put, it bridges your Python code to low-level languages: you write code much as you would with Numpy, and it translates it into low-level counterparts and compiles them at that level. This gives very significant performance gains, particularly for large matrix operations, and after a simple configuration change it can also utilize the GPU without any further code changes. One caveat: it is not easy to debug, because of that compilation layer.
  • NLTK - A natural language processing tool with very unique and salient features. It also includes some basic classifiers, such as Naive Bayes. If your work involves text processing, this is the right tool for preparing the data.
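As a taste of what these libraries hand you for free, the Naive Bayes classifier mentioned above can be sketched in a few lines of plain Python. This is an illustrative toy (the documents and labels are made up), not NLTK's or Scikit-learn's actual implementation:

```python
from collections import Counter, defaultdict
import math

def train_naive_bayes(docs):
    """docs: list of (word_list, label) pairs. Returns word counts per
    label, document counts per label, and the vocabulary."""
    word_counts = defaultdict(Counter)
    label_counts = Counter()
    vocab = set()
    for words, label in docs:
        label_counts[label] += 1
        word_counts[label].update(words)
        vocab.update(words)
    return word_counts, label_counts, vocab

def predict(words, word_counts, label_counts, vocab):
    """Pick the label maximizing log P(label) + sum of log P(word|label),
    with Laplace (add-one) smoothing."""
    total_docs = sum(label_counts.values())
    best_label, best_score = None, float("-inf")
    for label, n_docs in label_counts.items():
        total_words = sum(word_counts[label].values())
        score = math.log(n_docs / total_docs)
        for w in words:
            score += math.log((word_counts[label][w] + 1) / (total_words + len(vocab)))
        if score > best_score:
            best_label, best_score = label, score
    return best_label

docs = [("good great fun".split(), "pos"),
        ("bad awful boring".split(), "neg"),
        ("great film".split(), "pos"),
        ("boring plot".split(), "neg")]
model = train_naive_bayes(docs)
print(predict("great fun film".split(), *model))  # → pos
```

In practice you would simply call NLTK's `NaiveBayesClassifier.train` or Scikit-learn's `MultinomialNB` instead; the point is only to show how little is going on conceptually.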

Other Libraries -- (This list is being constantly updated.)

Deep Learning Libraries
  • Pylearn2 - "A machine learning research library". It is widely used, especially among deep learning researchers. It also includes some other features, such as Latent Dirichlet Allocation based on Gibbs sampling.
  •  (new) - Yet another neural network library, based on Theano. It is very simple to use and, I think, one of the best libraries for quickly prototyping new ideas.
  • Hebel - Another young alternative for deep learning. "Hebel is a library for deep learning with neural networks in Python using GPU acceleration with CUDA through PyCUDA."
  • Caffe - A Convolutional Neural Network library for large-scale problems. It stands out by having its own CNN implementation in low-level C++, instead of the well-known ImageNet implementation of Alex Krizhevsky, and claims to be a faster alternative to Alex's code. It also provides MATLAB and Python interfaces.
  • cxxnet - Very similar to Caffe. It supports multi-GPU training as well. I have not used it extensively, but it seems promising after my small experiments with the MNIST dataset. It also offers a very modular and easy development interface for new ideas, and it has Python and Matlab interfaces as well.
  • MXNet - A library from the same developers as cxxnet, with additional features drawn from the experience gathered on cxxnet and other related libraries. Unlike cxxnet, it has a good Python interface that provides convenient development features for deep learning, and even for general-purpose algorithms requiring GPU parallelism.
  • PyBrain - "PyBrain is short for Python-Based Reinforcement Learning, Artificial Intelligence and Neural Network Library."
  • Brainstorm - A Python-based, GPU-capable deep learning library released by the IDSIA lab. It is at a very early stage of development, but it is still eye-catching. At least for now, it targets recurrent networks and 2D convolution layers.
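Beneath all the differences in interface and speed, the libraries above automate the same core computation: layered transformations of the data, trained by gradient descent. A toy forward pass in plain Python makes the idea concrete (the weights here are arbitrary; real libraries add automatic differentiation, GPU kernels, and much more):

```python
def dense(inputs, weights, biases):
    """One fully connected layer; weights[j] holds the incoming
    weights of output unit j."""
    return [sum(w * x for w, x in zip(weights[j], inputs)) + biases[j]
            for j in range(len(biases))]

def relu(vec):
    """Elementwise rectified linear activation."""
    return [max(0.0, v) for v in vec]

# A tiny 3 -> 2 -> 1 network with hand-picked (arbitrary) weights.
x = [1.0, -2.0, 0.5]
h = relu(dense(x, [[0.5, 0.2, 0.3], [0.3, 0.4, 0.1]], [0.0, 0.1]))
y = dense(h, [[1.0, -1.0]], [0.0])
print(y)
```

Everything a deep learning library does, from convolutions to recurrence, is a variation on stacking such layers and computing gradients through them.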
Linear Model and SVM Libraries
  • LIBLINEAR - "A Library for Large Linear Classification". It is also interfaced by Scikit-learn.
  • LIBSVM - A state-of-the-art SVM library with kernel support. It also has third-party plug-ins, if its built-in capabilities are not enough for you.
  • Vowpal Wabbit - I hear the name very often but have not used it yet. However, it seems a decent library for fast machine learning.
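To make concrete what a large-scale linear classifier solves, here is a toy perceptron in plain Python, the simplest online linear classifier. This is only a sketch of the idea, not the actual solvers these libraries use (LIBLINEAR, for instance, optimizes regularized logistic or hinge loss):

```python
def perceptron(samples, epochs=10, lr=1.0):
    """samples: list of (features, label) with label in {-1, +1}.
    Returns weights w and bias b so that sign(w.x + b) predicts the label."""
    dim = len(samples[0][0])
    w, b = [0.0] * dim, 0.0
    for _ in range(epochs):
        for x, y in samples:
            margin = y * (sum(wi * xi for wi, xi in zip(w, x)) + b)
            if margin <= 0:  # misclassified: nudge the hyperplane toward x
                w = [wi + lr * y * xi for wi, xi in zip(w, x)]
                b += lr * y
    return w, b

data = [([2.0, 1.0], +1), ([1.5, 2.0], +1),
        ([-1.0, -0.5], -1), ([-2.0, -1.5], -1)]
w, b = perceptron(data)
preds = [1 if sum(wi * xi for wi, xi in zip(w, x)) + b > 0 else -1
         for x, _ in data]
print(preds)  # → [1, 1, -1, -1]
```

The appeal of the dedicated libraries is that they run this kind of update over millions of sparse features at high speed.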
General Purpose Libraries
  • Shogun - A general-purpose ML library, similar to Scikit-learn. It supports several programming languages.
  • mlpack - "a scalable c++ machine learning library".
  • Orange - Another general-purpose ML library. "Open source data visualization and analysis for novice and experts". It has a Self-Organizing Map implementation (which I happen to be studying :) ) that sets it apart from the others.
  • Milk - "SVMs (based on libsvm), k-NN, random forests, decision trees. It also performs feature selection. These classifiers can be joined in many ways to form different classification systems."
  • Weka - A very convenient machine learning tool with GUI support. If you do not want to code, you can load your data into Weka, select your algorithm from a drop-down menu, set the parameters, and go. Moreover, you can call its functions from your Java code, and it supports some other languages as well.
  • KNIME - Although I am not a big fan of this kind of tool, KNIME is another example of a GUI-based framework. You define your pipeline visually: drag some process boxes onto the workspace, connect them as you like, set the parameters, and run.
  • RapidMiner - Yet another GUI-based tool. It is very similar to KNIME but, beyond my own practice, it has wider capabilities suited to different domains of expertise.
Others
  • Monte - A Python framework for building gradient-based learning machines, like neural networks, conditional random fields, logistic regression, etc. Monte contains modules (that hold parameters, a cost-function and a gradient-function) and trainers (that can adapt a module's parameters by minimizing its cost-function on training data).
  • MDP - From the user's perspective, MDP is a collection of supervised and unsupervised learning algorithms and other data processing units that can be combined into data processing sequences and more complex feed-forward network architectures.
  • Statsmodels - Another great library, focused on statistical models and used mainly for predictive and exploratory analysis. If you want to fit linear models, do statistical analysis, and maybe a bit of predictive modeling, then Statsmodels is a great fit.
  •  is another statistical learning library which is similar to Scikit-learn in terms of its API. It has cross-validation and diagnostic tools as well, but it is not as comprehensive as Scikit-learn.
  • PyMC - The tool of choice for Bayesians. It includes Bayesian models, statistical distributions and diagnostic tools for the convergence of models, as well as some hierarchical models. If you want to do Bayesian analysis, you should check it out.
  • Gensim - A topic modelling tool centered on the Latent Dirichlet Allocation model. It also provides some NLP functionality.
  • Pattern - A web mining module for Python.
  •  - A data visualization tool for complicated datasets, supporting Mac and Windows.
  • XGBoost - If you like Gradient Boosting models and want to run them faster and stronger, this is a very useful library, with a C++ backend and Python and R wrappers. I should say that it is far faster than Sklearn's implementation.
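Since Gradient Boosting comes up in the last item, here is a toy version in plain Python: one-dimensional decision stumps fit to squared-loss residuals. This is purely illustrative (made-up data, no regularization); XGBoost's real algorithm adds second-order gradients, regularized objectives, and a highly optimized tree learner:

```python
def fit_stump(xs, residuals):
    """Find the threshold split minimizing squared error on the residuals."""
    best = None
    for t in sorted(set(xs)):
        left = [r for x, r in zip(xs, residuals) if x <= t]
        right = [r for x, r in zip(xs, residuals) if x > t]
        if not left or not right:
            continue
        lm, rm = sum(left) / len(left), sum(right) / len(right)
        err = sum((r - lm) ** 2 for r in left) + sum((r - rm) ** 2 for r in right)
        if best is None or err < best[0]:
            best = (err, t, lm, rm)
    _, t, lm, rm = best
    return lambda x: lm if x <= t else rm

def gradient_boost(xs, ys, rounds=20, lr=0.5):
    """Squared-loss boosting: each new stump fits the current residuals."""
    mean = sum(ys) / len(ys)
    stumps = []
    for _ in range(rounds):
        preds = [mean + lr * sum(s(x) for s in stumps) for x in xs]
        residuals = [y - p for y, p in zip(ys, preds)]
        stumps.append(fit_stump(xs, residuals))
    return lambda x: mean + lr * sum(s(x) for s in stumps)

model = gradient_boost([1, 2, 3, 4], [1.0, 1.0, 3.0, 3.0])
print(round(model(1), 2), round(model(4), 2))  # converges toward 1.0 and 3.0
```

Each round shrinks the residuals by the learning rate, which is exactly the ensemble-of-weak-learners loop that XGBoost executes at scale.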

My computation stack ---

After the libraries, I feel the need to say something about the computation environment I use.

  • Numpy, Scipy, Ipython, Ipython-Notebook, Spyder - After wasting some time with Matlab, I discovered these tools, which empower scientific computing with sufficient results. Numpy and Scipy are the well-known scientific computing libraries. Ipython is an alternative to the native Python interpreter with very useful features. Ipython-Notebook is a peculiar editor that runs in the web browser, which is especially good if you are working on a remote machine. Spyder is a Python IDE with very useful capabilities that make the experience very similar to Matlab. Last but not least, all of them are free. I really suggest looking at these before you select a framework for your scientific work.

Finally, a bit of self-promotion: my own ML codes ----

  •  - A very fast clustering procedure underpinned by Kohonen's Learning Procedure. It includes two alternatives: a basic Numpy implementation and a Theano implementation that is faster on large data.
  •  - Matlab code based on a C++ back-end.
  •  - Matlab code implementing very fast graph-based clustering, formulated as Replicator Dynamics Optimization.
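For the curious, the Kohonen update rule behind the first item can be sketched in plain Python. Note that this toy drops the neighborhood function entirely (so it degenerates to online competitive learning rather than a full self-organizing map), and the data, unit count, and constants are all illustrative:

```python
import random

def kohonen_1d(data, n_units=3, epochs=50, lr=0.5, seed=0):
    """Competitive-learning sketch of Kohonen's rule over scalar data:
    each sample pulls its best-matching unit toward itself, with a
    learning rate that decays over epochs."""
    rng = random.Random(seed)
    units = [rng.uniform(min(data), max(data)) for _ in range(n_units)]
    for epoch in range(epochs):
        rate = lr * (1 - epoch / epochs)  # decaying learning rate
        for x in data:
            bmu = min(range(n_units), key=lambda i: abs(units[i] - x))
            units[bmu] += rate * (x - units[bmu])
    return sorted(units)

centers = kohonen_1d([0.0, 0.1, 5.0, 5.1, 9.9, 10.0])
print(centers)  # the three units settle near the three clusters
```

A Numpy version vectorizes the distance computation across all units, and a Theano version compiles it for the GPU, which is where the speed difference on large data comes from.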

Reposted from: https://www.cnblogs.com/emanlee/p/5027524.html
