Feedzai Research

Pushing the state of the art to prevent financial crime

Feedzai develops advanced AI and engineering solutions to keep money flowing frictionlessly through financial services while keeping people safe from financial crime. Feedzai Research strives to push the state of the art, every day, developing ethical, cutting-edge technology that stays one step ahead of the fraudsters of tomorrow.

Research Areas

Feedzai Research investigates a wide range of topics in Machine Learning, AI Ethics, Systems Research, and Data Visualization. Our teams carry out extensive studies to design new methodologies and algorithms, and perform careful data-driven experiments to validate them. Below you can find a more detailed description of each group.

Machine Learning

The Feedzai Research Machine Learning group is committed to advancing state-of-the-art machine learning techniques to improve fraud detection, financial crime prevention, and anti-money laundering, and to detect and stop other types of illicit activity.

Moreover, Feedzai Research aims to disrupt the machine learning workflow itself, developing innovative solutions to automate and assist the processes of sampling, feature engineering, labelling and annotation, training, and offline and online evaluation.

The group's recent focus areas include Recurrent Neural Networks, Active Learning, Transfer Learning, Graph Representation Learning, Graph Neural Networks (GNNs), Bayesian Inference, and Optimization.

Systems Research

The Systems Research group focuses on improving the backend platform, both at training time and in production, to reduce hardware costs and time-to-market.

The Systems Research group has been working on new distributed stream-processing systems and their quality attributes: low latency, high throughput, recoverability, checkpointing, scalability, memory consumption, state management, benchmarking, and more.
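As a toy sketch (not Feedzai's actual engine), the kind of stateful stream operator whose latency, memory consumption, and checkpointable state such systems must manage can be illustrated by a per-key sliding-window counter; all names and data below are hypothetical:

```python
from collections import deque

class SlidingWindowCount:
    """Stateful stream operator: counts events per key within a time window.
    The per-key deques are exactly the state a checkpointing mechanism
    would have to persist and restore for recoverability."""

    def __init__(self, window_seconds):
        self.window = window_seconds
        self.state = {}  # key -> deque of event timestamps

    def process(self, key, ts):
        """Ingest one event and return the current in-window count for its key."""
        q = self.state.setdefault(key, deque())
        q.append(ts)
        # Evict events that have fallen out of the window (bounds memory use).
        while q and ts - q[0] > self.window:
            q.popleft()
        return len(q)

op = SlidingWindowCount(window_seconds=60)
print(op.process("card-123", 0))    # 1 event in the last minute
print(op.process("card-123", 10))   # 2
print(op.process("card-123", 50))   # 3
print(op.process("card-123", 120))  # older events evicted -> 1
```

The eviction loop keeps state proportional to the window size rather than the stream length, which is one reason memory consumption and state management appear together in the list of quality attributes above.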

The Systems Research group also interfaces with the other research groups and with the Product team to help bring new functionality into the product.

Fairness + Accountability + Transparency + Ethics

How can we keep people safe from financial crime and avoid unfair discrimination? Can we always understand, isolate and mitigate biases? Can we enhance trust by providing meaningful explanations? Is the complexity of the system reducing accountability and due process?

The FATE research group is working on new approaches to: detect, isolate, and mitigate bias in datasets, ML models, and AI systems; improve explainability and interpretability of ML; enable traceability and governance of the ML workflow; explain the impact of models on fairness and transparency; and enhance ethical and compliant practices.
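As a toy illustration (not Feedzai's actual method) of the kind of bias detection described above, one common check compares error rates across customer groups, for example the false-positive-rate gap between two groups; all data below is hypothetical:

```python
def false_positive_rate(y_true, y_pred):
    """FPR = FP / (FP + TN) over binary labels (1 = flagged as fraud)."""
    fp = sum(1 for t, p in zip(y_true, y_pred) if t == 0 and p == 1)
    tn = sum(1 for t, p in zip(y_true, y_pred) if t == 0 and p == 0)
    return fp / (fp + tn) if (fp + tn) else 0.0

def fpr_gap(y_true, y_pred, group):
    """Absolute FPR difference between groups 'a' and 'b':
    a large gap means one group's legitimate transactions are
    wrongly flagged far more often than the other's."""
    rates = {}
    for g in ("a", "b"):
        idx = [i for i, gi in enumerate(group) if gi == g]
        rates[g] = false_positive_rate([y_true[i] for i in idx],
                                       [y_pred[i] for i in idx])
    return abs(rates["a"] - rates["b"])

# Hypothetical labels: group "a" is flagged more often on legitimate cases.
y_true = [0, 0, 0, 0, 0, 0, 1, 1]
y_pred = [1, 1, 0, 0, 0, 0, 1, 0]
group  = ["a", "a", "a", "b", "b", "b", "a", "b"]
print(fpr_gap(y_true, y_pred, group))  # 0.666... (2/3 vs 0)
```

Detecting such a gap is only the first step; the mitigation and governance work listed above addresses what to do once it is found.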

FATE is also establishing collaborations with top universities, community groups, and other companies to raise awareness of the ethical questions surrounding AI and society.

Data Visualization

The Data Visualization group addresses the challenges of our main personas – Fraud Analysts and Data Scientists – as they investigate financial crime and analyze datasets and models.

For example, a level-1 Fraud Analyst typically wants to make a decision in under 10 seconds. Our research helps decide which model explanations to show, or how to highlight or hide historical data. Data Scientists, on the other hand, may have tens of hours to analyze large datasets; Data Visualization Research helps them make sense of data via data-understanding visualizations, and helps them analyze models via model performance graphs and bias reports.

The group has several areas of interest, including design systems, grammar of graphics, temporal data, uncertainty, geo-visualization, and more.


News

Paper “Promoting Fairness through Hyperparameter Optimization” presented at the ICLR 2021 Workshop on Responsible AI (RAI).

Paper “Weakly Supervised Multi-task Learning for Concept-based Explainability” presented at the ICLR 2021 Workshop on Weakly Supervised Learning (WeaSul).

Paper “How can I choose an explainer? An Application-grounded Evaluation of Post-hoc Explanations” presented at FAccT 2021, the ACM Conference on Fairness, Accountability, and Transparency.

Pedro Saleiro gives the tutorial “Dealing with bias and fairness in AI systems” at AAAI 2021 together with Rayid Ghani (CMU) and Kit Rodolfa (CMU).

Feedzai Research had two papers, both on innovations in explainability, accepted at the NeurIPS 2020 workshop on Human and Machine in-the-Loop Evaluation and Learning Strategies.

Paper “Machine learning methods to detect money laundering in the Bitcoin blockchain in the presence of label scarcity” accepted for presentation at the KDD Workshop on Machine Learning in Finance and for publication at ICAIF 2020.

“Interleaved Sequence RNNs for Fraud Detection” paper accepted at KDD 2020.

“Dealing with Bias and Fairness in Data Science Systems: A Practical Hands-on Tutorial” accepted at KDD 2020.

Fortune interviews Pedro Bizarro on the effects of coronavirus crisis on cybercriminal behavior.

Project CAMELOT, in partnership with CMU, IST, ULisboa, UCoimbra.

Opinion article about AI and Regulation (Portuguese).

Congressional testimony on “Equitable Algorithms: Examining Ways to Reduce AI Bias in Financial Services” based on joint work with Pedro Saleiro.

Pedro Saleiro interview about what to expect in this decade for AI (Portuguese).

Feedzai co-organizes Lisbon NeurIPS meetup.

Feedzai joins Instituto Superior Técnico Partner Network. Professor Mário Figueiredo is now Feedzai Professor in Machine Learning.

Latest Publications

“Weakly Supervised Multi-task Learning for Concept-based Explainability”, https://arxiv.org/abs/2104.12459

“Promoting Fairness through Hyperparameter Optimization”, https://arxiv.org/abs/2103.12715

“How can I choose an explainer? An Application-grounded Evaluation of Post-hoc Explanations”, https://arxiv.org/abs/2101.08758

“GuiltyWalker: Distance to illicit nodes in the Bitcoin network”, https://arxiv.org/abs/2102.05373

“TimeSHAP: Explaining Recurrent Models through Sequence Perturbations”, https://arxiv.org/abs/2012.00073

“Teaching the Machine to Explain Itself using Domain Knowledge”, https://arxiv.org/abs/2012.01932

“A Bandit-Based Algorithm for Fairness-Aware Hyperparameter Optimization”, https://arxiv.org/abs/2010.03665

“Machine learning methods to detect money laundering in the Bitcoin blockchain in the presence of label scarcity”, https://arxiv.org/abs/2005.14635

“Interleaved Sequence RNNs for Fraud Detection”, https://arxiv.org/abs/2002.05988 (video available)

“ARMS: Automated rules management system for fraud detection”, https://arxiv.org/abs/2002.06075

“Automatic Model Monitoring for Data Streams”, https://arxiv.org/abs/1908.04240

“Automatic detection of points of compromise”, U.S. Patent Application No. 16/355,562

“Computer memory management during real-time fraudulent transaction analysis”, U.S. Patent Application No. 16/102,570


Help us build the future of fighting fraud with AI

View Open Roles