Maryam Hashemzadeh


Research: I'm passionate about leveraging learning-based systems to address complex challenges. Currently, my research focuses on Large Language Models (LLMs) and their use in interactive decision-making, akin to Reinforcement Learning (RL) agents, with an emphasis on preventing hallucinations. I'm also exploring how LLMs can enhance generalization in lifelong learning.

Bio: I'm a research associate at Mila, supervised by Sarath Chandar and working in close collaboration with Marc-Alexandre Côté. I hold an MSc from the University of Alberta, where I conducted research with Martha White and Alona Fyshe, specializing in offline reinforcement learning.

For any inquiries, feel free to reach out to me via email!

Mail Twitter Scholar LinkedIn


Publications

Offline-Online Reinforcement Learning: Extending Batch and Online RL
Maryam Hashemzadeh, Wesley Chung, Martha White
-, 2021
Paper / BibTeX
@InProceedings{hashemzadeh2021offline, 
	author = {Maryam Hashemzadeh and Wesley Chung and Martha White}, 
	title = {Offline-Online Reinforcement Learning: Extending Batch and Online RL}, 
	booktitle = {-}, 
	year = {2021}, 
}
From Language to Language-ish: How Brain-Like is an LSTM's Representation of Nonsensical Language Stimuli?
Maryam Hashemzadeh, Greta Kaufeld, Martha White, Andrea Martin, Alona Fyshe
EMNLP, 2020
Paper / BibTeX
@InProceedings{hashemzadeh2020language, 
	author = {Maryam Hashemzadeh and Greta Kaufeld and Martha White and Andrea Martin and Alona Fyshe}, 
	title = {From Language to Language-ish: How Brain-Like is an LSTM's Representation of Nonsensical Language Stimuli?}, 
	booktitle = {EMNLP}, 
	year = {2020}, 
}
Value signals guide abstraction during learning
Aurelio Cortese, Asuka Yamamoto, Maryam Hashemzadeh, Pradyumna Sepulveda, Mitsuo Kawato, Benedetto De Martino
eLife, 2021
Paper / BibTeX
@Article{cortese2021value, 
	author = {Aurelio Cortese and Asuka Yamamoto and Maryam Hashemzadeh and Pradyumna Sepulveda and Mitsuo Kawato and Benedetto De Martino}, 
	title = {Value signals guide abstraction during learning}, 
	journal = {eLife}, 
	year = {2021}, 
}
Clustering subspace generalization to obtain faster reinforcement learning
Maryam Hashemzadeh, Reshad Hosseini, Majid Ahmadabadi
Evolving Systems, Springer, 2020
Paper / BibTeX
@Article{hashemzadeh2020clustering, 
	author = {Maryam Hashemzadeh and Reshad Hosseini and Majid Ahmadabadi}, 
	title = {Clustering subspace generalization to obtain faster reinforcement learning}, 
	journal = {Evolving Systems}, 
	publisher = {Springer}, 
	year = {2020}, 
}
Exploiting generalization in the subspaces for faster model-based reinforcement learning
Maryam Hashemzadeh, Reshad Hosseini, Majid Ahmadabadi
IEEE Transactions on Neural Networks and Learning Systems, 2018
Paper / BibTeX
@Article{hashemzadeh2018exploiting, 
	author = {Maryam Hashemzadeh and Reshad Hosseini and Majid Ahmadabadi}, 
	title = {Exploiting generalization in the subspaces for faster model-based reinforcement learning}, 
	journal = {IEEE Transactions on Neural Networks and Learning Systems}, 
	year = {2018}, 
}

Website template adapted from Michael Niemeyer.