Papers accepted at Canada AI Conference
06 May 2024

We will be presenting two papers at the Canadian Conference on AI, May 27-31, 2024, in Guelph, Canada.
Toward a Model of Associative Memory via Predictive Coding
Ehsan Ganjidoost, Jeff Orchard
Abstract: Predictive Coding Networks (PCnets) offer brain-like learning algorithms that adhere to biological constraints, and they can address a range of applications while overcoming some flaws in current solutions. The Hopfield Network (HN), introduced by John Hopfield as a model of associative memory, can store memories and recall them from partial information. However, it has limited storage capacity, forgets less frequent memories, and remains indecisive in ambiguous situations. Although the modern Hopfield network (MHN) attempts to overcome these limitations, its theoretical claims may be subjective and dataset-dependent, and the continuous version of the MHN addresses these shortcomings only at a cost. In contrast, our generative model of associative memory using a PCnet recalls each stored memory with equal likelihood and does not remain indecisive in unclear cases. Moreover, it requires no additional computation or resources compared to the MHN, and it preserves the simplicity of the classic Hopfield network.
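As background, the classic Hopfield behaviour the abstract refers to (storing memories and recalling them from partial information) can be sketched in a few lines. This is a minimal illustration of the standard Hebbian storage rule only, not the paper's PCnet model; all names and patterns here are our own:

```python
import numpy as np

# Minimal classic Hopfield network: Hebbian storage plus sign-update recall.
# Illustrates the background model only, not the paper's PCnet.

def store(patterns):
    """Hebbian weight matrix from rows of +/-1 patterns, no self-connections."""
    n = patterns.shape[1]
    W = patterns.T @ patterns / n
    np.fill_diagonal(W, 0.0)
    return W

def recall(W, cue, steps=10):
    """Repeatedly apply the sign update until the state settles."""
    s = cue.copy()
    for _ in range(steps):
        s = np.sign(W @ s)
        s[s == 0] = 1.0  # break ties consistently
    return s

# Two orthogonal 8-unit memories.
p1 = np.array([1, 1, 1, 1, -1, -1, -1, -1], dtype=float)
p2 = np.array([1, -1, 1, -1, 1, -1, 1, -1], dtype=float)
W = store(np.stack([p1, p2]))

cue = p1.copy()
cue[0] = -1.0  # corrupt one bit of the first memory

print(np.array_equal(recall(W, cue), p1))  # True: the memory is recovered
```

With only two memories in eight units the network is well under its storage capacity, so the corrupted cue is pulled back to the stored pattern; the capacity and indecisiveness issues the abstract mentions appear as more (or more correlated) patterns are stored.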
Humans Don’t Get Fooled: Does Predictive Coding Defend Against Adversarial Attack?
Junteng Zheng, Jeff Orchard
Abstract: The success of backpropagation, a foundational method in machine learning, has somewhat overshadowed the potential of biologically plausible learning. However, a prevalent threat to contemporary artificial neural networks trained with backpropagation is their fragility to adversarial attack, in stark contrast to human visual perception. In our experiments, we demonstrate that predictive coding networks, a biologically plausible learning approach, exhibit robustness against adversarial attacks of various forms. This finding may offer a novel perspective on enhancing the robustness of machine learning models, and it demonstrates the potential of applying biologically plausible learning methods more broadly.
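To make the threat model concrete, here is a hypothetical sketch of a gradient-sign (FGSM-style) attack on a toy linear classifier. The classifier, weights, and step size are our own illustrative assumptions, not taken from the paper; they show how a small, targeted perturbation can flip a correct prediction:

```python
import numpy as np

# Toy gradient-sign attack on a linear classifier with logistic loss.
# All numbers here are illustrative; the paper attacks neural networks.

def predict(w, x):
    """Linear classifier: sign of the score w.x."""
    return 1 if w @ x > 0 else -1

def fgsm(w, x, y, eps):
    """Perturb x by eps in the sign of the loss gradient w.r.t. x."""
    z = y * (w @ x)
    grad = -y * (1.0 / (1.0 + np.exp(z))) * w  # d(logistic loss)/dx
    return x + eps * np.sign(grad)

w = np.array([2.0, -1.0])
x = np.array([1.0, 1.0])          # correctly classified as +1
x_adv = fgsm(w, x, y=1, eps=0.6)  # small per-coordinate perturbation

print(predict(w, x), predict(w, x_adv))  # 1 -1
```

Because the perturbation moves every coordinate against the decision boundary at once, a budget far smaller than the data scale suffices to change the label, which is exactly the fragility (and its absence in human perception) that motivates the paper.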