invariant tests

In a previous post I discussed the importance of behavioural unit tests and how their seemingly brittle nature is part of the evolution of test-first code bases.

Having accepted that my behavioural specifications are open to change as I evolve my design and model, whether through refinement, refactoring or rework, how can I be sure that in changing those specifications I don't change the overall behaviour of my application?

Gojko Adzic discusses this briefly in the podcast of his DDD eXchange talk entitled DDD, TDD, BDD, where he talks about the synergies to be found in the x-DD practices. He explains that iterative design requires effective regression tests, but unit tests are often bound to the design and so must change and evolve with it.
He also implies that the often-touted phrase "unit tests enable me to refactor efficiently" isn't strictly true if those specifications or unit tests can themselves change.

I totally agree: with malleable specifications I open myself up to the prospect that, if I change them enough that the changes amount to rework rather than refactoring, I no longer have the over-arching regression tests that tell me I have not changed currently described behaviour.

Gojko calls this the need for invariant tests: those BDD-level tests that cover the system at a higher level than unit-level specifications, and so remain stable while the design beneath them changes.
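To make the distinction concrete, here is a minimal sketch (the `ShoppingBasket` domain is hypothetical, not from the talk) contrasting a unit test that is bound to the design with an invariant test that exercises only observable behaviour:

```python
class ShoppingBasket:
    """Tiny illustrative domain model; prices held in pence."""

    def __init__(self):
        self._items = []  # internal detail; a rework might change this to a dict

    def add(self, name, price_pence):
        self._items.append((name, price_pence))

    def total(self):
        return sum(price for _, price in self._items)


def unit_test_internal_storage():
    # Design-bound: couples to the private list, so it breaks if the
    # storage is reworked, even though behaviour is unchanged.
    basket = ShoppingBasket()
    basket.add("tea", 250)
    assert basket._items == [("tea", 250)]


def invariant_test_total():
    # Invariant: asserts only the public behaviour a user cares about,
    # so it survives refactoring and rework of the internals.
    basket = ShoppingBasket()
    basket.add("tea", 250)
    basket.add("milk", 120)
    assert basket.total() == 370


unit_test_internal_storage()
invariant_test_total()
```

If `_items` were later reworked into a dictionary of quantities, the first test would have to be rewritten while the second would still pass, which is exactly the regression safety net the invariant tests provide.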

These invariant tests are often what is missing from my working practice at the moment, whether through my own omission, a project or team set-up that is not aligned to BDD practices, or the fact that they were never put in place up front and are very difficult to introduce at a later date.
I believe I need to give this greater importance, and it will be the next part of my "being new": ensuring invariant tests are present where possible so they can support my malleable specifications.