Animals use knowledge of the environment to guide efficient foraging decisions, yet it remains unclear how this knowledge is accumulated and maintained by recurrently connected circuits in the brain. To guide the study of how structured knowledge is accumulated and maintained to support optimal foraging decisions, we reverse-engineered recurrent neural networks (RNNs) trained to perform dynamic foraging tasks. We highlight two task variants, a highly structured block-switch variant and a weakly structured random-walk variant, and investigate how network dynamics are shaped by the structure of each. Using the actor-critic reinforcement learning (RL) algorithm, we first trained RNNs that perform both task variants efficiently. We find that the RNN dynamics exploit task structure to guide efficient foraging decisions, yielding a distinct dynamical system for each variant. We then apply dynamical systems analysis and uncover attractor structures in the RNNs that integrate structural priors with ongoing experience to adapt efficiently to reward dynamics. Critically, the attractors predict how agents endowed with the corresponding prior over task structure will behave in unseen but related environments. In summary, we demonstrate that actor-critic RNNs can provide hypotheses for how neural dynamics are shaped by task structure, reveal how task-shaped dynamical motifs implement computation, and generate testable predictions about behavior in novel but related environments. Using data from over 300 mice performing these task variants, we will examine how different training histories shape the dynamical motifs that emerge during learning to guide behavior.
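As a toy illustration only (not the paper's actual tasks or trained models), the block-switch foraging environment and an actor-critic learner can be sketched as below. All names, parameters, and update rules here are our own assumptions; in particular, a single leaky-integrator state `h` stands in for the recurrent state of a trained RNN, and the critic is a running reward baseline.

```python
import numpy as np

rng = np.random.default_rng(0)

def block_switch_bandit(n_trials=2000, block_len=100, p_high=0.8, p_low=0.2):
    """Per-trial reward probabilities (left, right) for a two-armed bandit
    whose high-reward side flips every `block_len` trials (hypothetical
    stand-in for the highly structured block-switch variant)."""
    probs = []
    high_is_left = True
    for t in range(n_trials):
        if t > 0 and t % block_len == 0:
            high_is_left = not high_is_left
        probs.append((p_high, p_low) if high_is_left else (p_low, p_high))
    return probs

def run_agent(n_trials=2000, alpha_actor=0.3, alpha_critic=0.1, leak=0.95):
    """Minimal actor-critic sketch: a logistic actor reads a leaky evidence
    state h, and a scalar critic v supplies a reward-prediction-error baseline."""
    probs = block_switch_bandit(n_trials)
    h = 0.0   # recurrent evidence state: > 0 favors the left arm
    v = 0.0   # critic's estimate of expected reward
    rewards = []
    for p_left, p_right in probs:
        p_choose_left = 1.0 / (1.0 + np.exp(-h))   # actor: logistic policy on h
        choose_left = rng.random() < p_choose_left
        r = float(rng.random() < (p_left if choose_left else p_right))
        delta = r - v                               # prediction error
        v += alpha_critic * delta                   # critic update
        sign = 1.0 if choose_left else -1.0
        h = leak * h + alpha_actor * sign * delta   # actor update folded into h
        rewards.append(r)
    return float(np.mean(rewards))

avg_reward = run_agent()
```

Because the evidence state integrates signed prediction errors and leaks between trials, this sketch tracks the rewarded side within a block and re-adapts after each switch, earning above-chance reward; a trained RNN would learn an analogous integration from experience rather than having it hand-built.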