The ability to structure incoming sensory input is crucial for building an appropriate mental model of one's environment. An abundant literature has shown that, when faced with sequences of stimuli, humans can accumulate sensory evidence, learn statistical dependencies between consecutive or non-consecutive elements, and even discover abstract rules. However, because each of these abilities has been tested with its own ad hoc paradigm, we lack data and theory on how they interact with each other. Here we propose an experimental framework to jointly investigate evidence accumulation and rule learning within a single visual sensory prediction task. We presented participants with sequences of 10 gratings whose orientations could, or could not, follow a hidden rule in the form of a 90° switch in the middle of the sequence. Participants were always asked to estimate the orientation of the last (10th) element of the sequence, while the presentation stopped after 3, 5, 7 or 9 elements, forcing them to make predictions more or less deep into the future. Interestingly, participants' behavior on trials without any hidden rule accurately predicted whether or not they would find the rule on independent trials. Using computational modelling, we showed that subject-level variability in the inference parameters, and in particular in the integration leak over successive elements, was significantly correlated with whether participants discovered the hidden rule. We subsequently showed that finding the rule modified the way participants made their inference, revealing a bidirectional interaction between the sensory evidence accumulation and rule learning systems. Finally, we used recurrent neural networks to show that the correlational interactions observed in humans were causally linked in the networks, suggesting a mechanistic intertwining of these two cognitive processes.
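To make the notion of an integration leak concrete, here is a minimal sketch of leaky evidence accumulation over a sequence of grating orientations. This is an illustrative toy model, not the authors' actual fitted model: the function name, the `leak` parameter, and the circular-averaging scheme (doubling angles to handle the 180° periodicity of grating orientations) are all assumptions made for the example.

```python
import math

def leaky_estimate(orientations_deg, leak=0.2):
    """Hypothetical leaky accumulation of grating orientations.

    Past evidence decays by a factor (1 - leak) at each new element,
    so leak=0 is perfect integration and leak=1 keeps only the last
    element. Grating orientations are circular with a 180-degree
    period, so angles are doubled before averaging on the unit circle
    and halved afterwards.
    """
    x = y = 0.0  # accumulated evidence vector
    for theta in orientations_deg:
        phi = math.radians(2.0 * theta)  # double the angle (180-degree period)
        x = (1.0 - leak) * x + math.cos(phi)
        y = (1.0 - leak) * y + math.sin(phi)
    # Resultant direction, mapped back to the 0-180 degree orientation range
    return (math.degrees(math.atan2(y, x)) / 2.0) % 180.0
```

In this sketch, a larger `leak` makes the estimate rely more on recent elements, which is the kind of subject-level parameter the abstract reports as correlating with rule discovery.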