Probably; more or less. Some caution advised.
Retrospective self-knowledge is difficult. We don’t necessarily know what we’re doing even when we’re good at doing it, and what we say we’re doing may not be what we’re actually doing.
In the late ’70s a team at Stanford built an AI system with a professor from a different field, a man so expert in his subject that he’d written the standard reference text. The computer scientists built the system to do exactly as he told them. When they ran it on real problems, though, it sucked. After a careful review to make sure they hadn’t misimplemented what he’d said (they hadn’t), they did a ‘protocol analysis,’ which is basically watching meticulously as he actually performed the core task, the one at which he was deservedly a world-renowned expert. He wasn’t doing what he said he did, even though he sincerely believed he was. The computer scientists discovered that he was integrating information he wasn’t aware of noticing, and that information critically changed the outcomes.
As with the Stanford professor, what our authors tell us to do is what they were able to put into words as effective for themselves and for their students, but neither we nor they can be certain that it’s what they actually did.
I still trust ‘em. Just while respecting the epistemological limits of introspection. Little of what goes on in our minds is available to consciousness, and lots of areas of our minds lie to consciousness to get it to go away and not bother them.