Although computational thinking is growing, it is still tricky to propose or discuss theories in algorithmic terms, at least in the cognitive sciences. One reason for this difficulty may be that proponents of computational thinking do not invest enough in laying the groundwork. One such investment is specifying, at each discussion, the primitive operations, so as to fix the level of abstraction precisely. In this post, I give an illustrative example of the problem and a practical example from the neuroscience of time perception.

The primitives of multiplication

Take the example of the multiplication algorithm we learned at school. We can still sketch and solve…
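Since the preview is cut off here, the point can be made concrete with a minimal sketch (illustrative, not taken from the article): long multiplication written so that its primitive operations, single-digit products and carried single-digit additions, are explicit rather than hidden inside the "multiply" step.

def multiply(a: str, b: str) -> str:
    """Grade-school long multiplication, spelled out in terms of
    single-digit primitives: digit products, carries, and digit sums."""
    a_digits = [int(d) for d in reversed(a)]
    b_digits = [int(d) for d in reversed(b)]
    result = [0] * (len(a_digits) + len(b_digits))

    for i, da in enumerate(a_digits):
        carry = 0
        for j, db in enumerate(b_digits):
            # Primitive 1: multiply two single digits (a memorized table).
            # Primitive 2: add single digits and propagate the carry.
            total = result[i + j] + da * db + carry
            result[i + j] = total % 10
            carry = total // 10
        result[i + len(b_digits)] += carry

    # Strip leading zeros and restore conventional digit order.
    while len(result) > 1 and result[-1] == 0:
        result.pop()
    return "".join(str(d) for d in reversed(result))

assert multiply("127", "46") == str(127 * 46)

Change which operations count as primitive (say, allow multiplying by any power of ten in one step) and you have described the same computation at a different level of abstraction, which is exactly what needs to be pinned down in a discussion.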


Photo by Mladen Milinovic on Unsplash

A friendly introduction to the problem of reinforcement learning with examples from neuroscience

Reinforcement learning has recently entered the spotlight with accomplishments such as AlphaGo, and is supposedly one of our best shots at Artificial General Intelligence, or at least at more general intelligence. In this post, I will trace some of its history back to B. F. Skinner's studies of operant conditioning.

The real question is not whether machines think but whether men do — B. F. Skinner
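As a minimal sketch of that connection (the two-lever setup, reward probabilities, and learning rate below are illustrative, not from the article), a simple bandit agent shows the operant-conditioning principle at work: actions followed by reward become more likely to be repeated.

import random

reward_prob = {"lever_a": 0.8, "lever_b": 0.2}  # hypothetical Skinner box
value = {"lever_a": 0.0, "lever_b": 0.0}        # the agent's reward estimates
alpha, epsilon = 0.1, 0.1                        # learning and exploration rates

for _ in range(1000):
    # Explore occasionally; otherwise repeat the action valued highest so far.
    if random.random() < epsilon:
        action = random.choice(list(value))
    else:
        action = max(value, key=value.get)
    reward = 1.0 if random.random() < reward_prob[action] else 0.0
    # Reward strengthens the action's value, as reinforcement does in conditioning.
    value[action] += alpha * (reward - value[action])

print(value)  # value["lever_a"] should end up well above value["lever_b"]

After enough trials the agent presses the richer lever almost exclusively, which is the behavioral signature Skinner observed long before the algorithms were formalized.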


Photo by Damian Patkowski on Unsplash

Thoughts from the opening ceremony at Khipu Latin American AI Conference

Imagine yourself going to Europe or the United States to study topics at the frontier of your field. Moreover, you are applying your knowledge in a well-developed industry and advancing a well-paid career that is intellectually rewarding and demanding. After finishing your Ph.D., you can go back to your home country, where industry has little demand for technical expertise at your level, or you can keep learning from top-notch researchers and advancing your career.

The dilemma should be clear: the decision to turn your back on your home country weighs heavily. The country that nurtured you in this specific…



In many companies, data analysts spend a lot of time moving models around. To do so, they have to make sure all dependencies are compatible and that their environment does not mix library versions. Sometimes people even move model weights around in Excel spreadsheets to use them inside other software, opening a huge gap between the training process and the final deployed model.

One of the reasons these practices persist is that there is no widespread protocol for serving models practically and without too much overhead. We wanted to use our models directly from Sklearn (or similar), in some…
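A minimal sketch of the kind of workflow this points toward (the dataset, model, and file name here are illustrative, not from the article): persist the fitted Sklearn object itself with joblib, so that the exact trained model, rather than hand-copied weights, is what gets loaded at serving time.

from joblib import dump, load
from sklearn.datasets import load_iris
from sklearn.linear_model import LogisticRegression

# Train once, then persist the fitted estimator instead of copying
# its weights around by hand.
X, y = load_iris(return_X_y=True)
model = LogisticRegression(max_iter=1000).fit(X, y)
dump(model, "model.joblib")

# At serving time, an environment with the same pinned library versions
# reloads the identical fitted object.
served_model = load("model.joblib")
print(served_model.predict(X[:3]))

This closes the gap between training and deployment, though it still assumes the serving environment pins the same library versions, which is the dependency problem mentioned above.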


UMass Lowell

Recently, the CISS 2 summer school on deep learning and dialog systems took place in Lowell. It was organized by the Text Machine Lab at UMass Lowell and the Neural Networks and Deep Learning Lab at MIPT (the iPavlov project) in Moscow. Participants traveled from around the world to meet top-notch researchers and learn the state of the art.

Besides the lectures and tutorials, the school held a competition among the participants, based on team projects carried out during the school. There was limited time to work on the team projects, since the lectures and tutorials took up…

Estevão Uyrá Pardillos Vieira

Master in Neuroscience and Cognition, Data Scientist @ Nubank. AI and Complexity enthusiast.
