Real-time agent-centered heuristic search is a well-studied problem in which an agent that can only reason locally about the world must travel to a goal location using bounded computation and memory at each step. Many algorithms have been proposed for this problem, and theoretical results have also been derived for their worst-case performance. Assuming sufficiently poor tie-breaking, among other conditions, we derive theoretical best-case bounds on the number of steps required to reach the goal using LRTA*, a canonical real-time agent-centered heuristic search algorithm. We show that the number of steps required to reach the goal can grow asymptotically faster than the size of the state space, resulting in “scrubbing” behavior in which the agent repeatedly revisits the same states. This theoretical result, supported by experimental data, encourages recent work in the field that uses novel tie-breaking schemes and/or performs different types of learning.
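To make the learning and tie-breaking dynamics concrete, the following is a minimal sketch of the standard one-step-lookahead LRTA* loop on a 4-connected grid. The grid setting, unit edge costs, Manhattan initial heuristic, step cap, and the “first minimizer in neighbor order” tie-breaking rule are illustrative assumptions, not details taken from this paper; the analysis above concerns what happens when such a tie-breaking rule is chosen adversarially.

```python
# Minimal LRTA* sketch (one-step lookahead) on an obstacle-free 4-connected grid.
# Assumed setup: unit edge costs, Manhattan initial heuristic, ties broken by
# neighbor ordering -- all illustrative choices, not the paper's experimental setup.
from typing import Dict, List, Tuple

State = Tuple[int, int]

def neighbors(s: State, width: int, height: int) -> List[State]:
    """4-connected neighbors that stay inside the grid."""
    x, y = s
    cand = [(x + 1, y), (x - 1, y), (x, y + 1), (x, y - 1)]
    return [(nx, ny) for nx, ny in cand if 0 <= nx < width and 0 <= ny < height]

def manhattan(s: State, goal: State) -> int:
    return abs(s[0] - goal[0]) + abs(s[1] - goal[1])

def lrta_star(start: State, goal: State, width: int, height: int,
              max_steps: int = 10_000) -> int:
    """Run LRTA* until the goal is reached; return the number of steps taken.

    Each step: (1) raise h(s) to the minimum one-step lookahead value,
    (2) move to a successor minimizing cost + h.  The tie-breaking rule in
    the move step (here: first minimizer in neighbor order) is the quantity
    the best-case analysis manipulates; repeated updates of the same states
    are the "scrubbing" behavior described above.
    """
    h: Dict[State, int] = {}  # learned heuristic values (defaults to Manhattan)
    s = start
    steps = 0
    while s != goal and steps < max_steps:
        succ = neighbors(s, width, height)
        # One-step lookahead values: unit move cost plus current h of each successor.
        f = [1 + h.get(n, manhattan(n, goal)) for n in succ]
        h[s] = min(f)               # learning step: update h at the current state
        s = succ[f.index(min(f))]   # move step: ties broken by neighbor order
        steps += 1
    return steps

if __name__ == "__main__":
    # On an empty grid the Manhattan heuristic is perfect, so no scrubbing occurs.
    print(lrta_star(start=(0, 0), goal=(7, 7), width=8, height=8))
```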