Although computational thinking is growing, it is still tricky to propose or discuss theories in algorithmic terms, at least in the cognitive sciences. One reason for this difficulty may be that advocates of computational thinking do not invest enough in laying the groundwork. One such investment is specifying, in each discussion, the primitive operations, so that the level of abstraction is set precisely. In this post, I give an illustrative example of the problem and a practical example from the neuroscience of time perception.
Take the example of the multiplication algorithm we learned at school. We can still sketch and solve…
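The post is truncated above, but the point about fixing primitive operations can be made concrete. Below is a minimal Python sketch (my illustration, not the post's own code) of the schoolbook multiplication algorithm, where the primitives are restricted to single-digit multiplication, addition, and carrying; everything else is bookkeeping at a higher level of abstraction.

```python
def schoolbook_multiply(a: int, b: int) -> int:
    """Long multiplication using only single-digit primitives.

    Primitive operations assumed: single-digit multiply, single-digit
    add, and carrying. Everything else is bookkeeping.
    """
    a_digits = [int(d) for d in str(a)][::-1]  # least significant digit first
    b_digits = [int(d) for d in str(b)][::-1]

    # One slot per possible output digit.
    result = [0] * (len(a_digits) + len(b_digits))

    for i, da in enumerate(a_digits):
        carry = 0
        for j, db in enumerate(b_digits):
            # Primitive: single-digit multiply, plus the accumulated value.
            total = result[i + j] + da * db + carry
            result[i + j] = total % 10   # primitive: keep one digit
            carry = total // 10          # primitive: carry the rest
        result[i + len(b_digits)] += carry

    # Strip leading zeros and reassemble the number.
    while len(result) > 1 and result[-1] == 0:
        result.pop()
    return int("".join(str(d) for d in reversed(result)))


assert schoolbook_multiply(128, 46) == 128 * 46  # 5888
```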
Reinforcement learning has recently entered the spotlight with accomplishments such as AlphaGo, and it is supposedly one of our best shots at Artificial General Intelligence, or at least at more general intelligence. In this post, I will trace some of its history back to the study of operant conditioning by Skinner.
The real question is not whether machines think but whether men do — B. F. Skinner
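As a hedged aside (my sketch, not the post's): the conceptual bridge from operant conditioning to reinforcement learning is visible in the simplest RL agents, where actions followed by reward become more likely to be emitted again. The action names and reward probabilities below are made up for illustration.

```python
import random

# A minimal epsilon-greedy bandit: rewarded actions gain estimated value
# and are chosen more often, a computational analogue of operant
# conditioning's "reinforcement strengthens behavior".
REWARD_PROB = {"lever_a": 0.2, "lever_b": 0.8}  # hypothetical environment
EPSILON = 0.1  # exploration rate
ALPHA = 0.1    # learning rate

values = {action: 0.0 for action in REWARD_PROB}

for _ in range(10_000):
    if random.random() < EPSILON:              # explore occasionally
        action = random.choice(list(values))
    else:                                      # exploit the best estimate
        action = max(values, key=values.get)
    reward = 1.0 if random.random() < REWARD_PROB[action] else 0.0
    # Incremental update: move the estimate toward the observed reward.
    values[action] += ALPHA * (reward - values[action])

print(values)  # lever_b's estimate approaches 0.8
```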
Imagine yourself going to Europe or the United States to study topics at the frontier of your field. Moreover, you are applying your knowledge in a well-developed industry, advancing a well-paid career that is intellectually rewarding and demanding. After finishing your Ph.D., you can go back to your home country, where industry has little demand for technical expertise at your level, or you can keep learning from top-notch researchers and advancing your career.
The dilemma should be clear: the decision to turn your back on your home country weighs heavily. The country that nurtured you in this specific…
In many companies, Data Analysts spend a lot of time moving models around. To do so, they have to make sure all dependencies are compatible and that their environments do not mix library versions. Sometimes, people even move models' weights around in Excel spreadsheets to use them inside other software, opening a huge gap between the training process and the final deployed model.
One of the reasons these practices exist is that there is no widespread protocol for serving models practically and without too much overhead. We wanted to use our models directly from Sklearn (or similar), in some…
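The post is cut off before it describes the actual solution, so the following is only a sketch of one common way to close the train/deploy gap it describes, assuming joblib for persistence and Flask for serving; the filename and endpoint are hypothetical, not the post's real setup.

```python
# Sketch: persist a fitted scikit-learn estimator and serve it over HTTP,
# so the deployed model is the exact artifact that was trained.
import joblib
from flask import Flask, jsonify, request
from sklearn.datasets import load_iris
from sklearn.linear_model import LogisticRegression

# --- training side: fit once, persist the whole estimator ---
X, y = load_iris(return_X_y=True)
model = LogisticRegression(max_iter=1000).fit(X, y)
joblib.dump(model, "model.joblib")

# --- serving side: load the artifact, no spreadsheets involved ---
app = Flask(__name__)
model = joblib.load("model.joblib")

@app.route("/predict", methods=["POST"])
def predict():
    # Expects JSON like {"features": [[5.1, 3.5, 1.4, 0.2]]}.
    features = request.get_json()["features"]
    return jsonify(predictions=model.predict(features).tolist())

if __name__ == "__main__":
    app.run(port=5000)
```

A client would then POST its feature rows to /predict, and the predictions come from the very same weights produced by training, with no manual copying in between.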
Recently, the CISS 2 summer school on deep learning and dialog systems took place in Lowell. It was organized by the Text Machine Lab of UMass Lowell and the Neural Networks and Deep Learning Lab at MIPT (the iPavlov project), Moscow. People traveled from around the world to attend the school, meet top-notch researchers, and learn the state of the art.
Besides the lectures and tutorials, the school held a competition among the participants, based on team projects carried out during the school. There was limited time to work on the team projects, since the lectures and tutorials took up…
Master in Neuroscience and Cognition, Data Scientist @ Nubank. AI and Complexity enthusiast.