SoP3: Guarded Commands
This post discusses the addition of guarded commands to the small imperative language we’ve been working on so far. We build on the previous article, but change the Program grammar to include the new alternative and iterative commands; likewise for the Hoare triples.
{-# OPTIONS --no-positivity-check #-}
module SoP3 where
open import MyPrelude hiding (_⨾_ ; swap ; [_]) renaming (¬_ to NOT ; _>_ to _ℕ>_)
open ≤-Reasoning using(_≤⟨_⟩_) renaming(begin_ to ≤-begin_ ; _∎ to _≤∎ ; _≡⟨_⟩_ to _=⟨_⟩_)
open import SoP2 public hiding (module Program ; Program ; ⟦_⟧_⟦_⟧ ; module ⟦_⟧_⟦_⟧)
You might’ve noticed that we’re using a dangerous OPTION, whereas in the past we took workarounds to obtain what we needed. Positivity checking is important to avoid producing definitions without any normal form, among other things. We disable it only briefly, for the semantics below, where it does not seem dangerous.
Definition
A guarded command is a term of the form B₁ ⟶ S₁ ▯ ⋯ ▯ Bₙ ⟶ Sₙ where the Bᵢ are conditions and the Sᵢ are programs. Such terms are used for conditionals and loops, and so a program itself has guarded commands in its structure. Thus, the two are mutually defined data.
-- expected precedence
infix 7 _≔_ _,_≔₂_,_
infix 6 _⟶_
infixr 5 _⨾_
infixr 4 _▯_
mutual

  data GuardedCommand : Set₁ where
    _⟶_ : Expr 𝒫 → Program → GuardedCommand
    _▯_ : GuardedCommand → GuardedCommand → GuardedCommand

  data Program : Set₁ where
    skip        : Program
    abort       : Program
    _⨾_         : Program → Program → Program
    _≔_         : ∀ {t} → Variable t → Expr t → Program
    _,_≔₂_,_    : ∀ {s t} → Variable t → Variable s → Expr t → Expr s → Program
    if_fi do_od : GuardedCommand → Program
Those who realize lists as free monoids might notice that our guarded commands are (nearly) the free semigroup on condition-program pairs.
The execution behaviour of a set of guarded commands
c₁ ⟶ t₁
▯ c₂ ⟶ t₂
⋮
▯ cₙ ⟶ tₙ
is that if any of the conditions hold, then one is nondeterministically picked and the associated statements are executed, after which, in the loop case, the process begins again. What if none hold? Then the selection aborts and the loop skips.
• It is to be noted that we ought to consider ▯ as commutative, i.e., the order of guarded commands is not important.
• From the evaluation method described, if we allow empty collections of guarded commands, then if fi and do od are just abort and skip, respectively.
• Both Ada and CSP use guarded commands. Of course, some languages can implement variations on guarded commands; Perl, for example.
Examples
For example,
max-prog : Program
max-prog = if Var x ≤ Var y ⟶ z ≔ Var {𝒩} y
             ▯ Var y ≤ Var x ⟶ z ≔ Var {𝒩} x
           fi
Here we assign z the greater of x or y, and if they have the same value then it does not matter whether it gets assigned x or y.
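Dually, and using exactly the same constructs, one could assign z the smaller of the two; the following min-prog is our own sketch, not part of the original development.

-- a sketch, dual to max-prog; the name min-prog is ours
min-prog : Program
min-prog = if Var x ≤ Var y ⟶ z ≔ Var {𝒩} x
             ▯ Var y ≤ Var x ⟶ z ≔ Var {𝒩} y
           fi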
For a loop example, suppose we are on a grid, say representing street intersections, and our starting position is (a,b) and our location of interest is (x,y). Then there are two obvious navigations: drive/move horizontally from a to x then vertically from b to y, or perform these operations the other way around. A less obvious algorithm: randomly move one horizontal or vertical step in the direction of (x,y) until you’ve reached the correct row or column, then head straight to (x,y). Notice that this latter algorithm, while clearly correct, does not produce the same path each time due to the randomness of the step directions. In the language, it might look like
row, col ≔ a,b;
add (row, col) to path sequence;
do row ≠ x → row ≔ row + 1 ; add (row, col) to path sequence
▯ col ≠ y → col ≔ col + 1 ; add (row, col) to path sequence
od
Again, note that this is not a ‘(traditional) mathematical function’ since for the same inputs it may yield different output sequences!
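For concreteness, here is a rough sketch of that loop’s core in our actual syntax; the name walk and the reuse of x and y for the current position with parameters X and Y as the target are our own choices, the path bookkeeping is dropped since the language has no sequences, and, as in the pseudocode above, steps are assumed to go only in the increasing direction.

-- a sketch only: x, y hold the current position, X, Y are the target coordinates;
-- no path is recorded, and movement is assumed to be in the increasing direction
walk : ℕ → ℕ → Program
walk X Y = do (¬ (Var x ≈ 𝒩 X)) ⟶ x ≔ (Var x + 𝒩 1)
             ▯ (¬ (Var y ≈ 𝒩 Y)) ⟶ y ≔ (Var y + 𝒩 1)
            od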
Folding
Guarded command lists, as remarked earlier, are similar to arbitrary lists but do not have a unit for concatenation and so constitute what is known as a semigroup. Anyhow, similar to lists, they come equipped with a fold function:
-- gcfold(B₁ ⟶ S₁ ▯ ⋯ ▯ Bₙ ⟶ Sₙ) = (B₁ ⋆ S₁) ⊕ ⋯ ⊕ (Bₙ ⋆ Sₙ)
gcfold : ∀ {ℓ} {A : Set ℓ} (_⋆_ : Expr 𝒫 → Program → A) (_⊕_ : A → A → A)
       → GuardedCommand → A
gcfold _⋆_ _⊕_ (B ⟶ S) = B ⋆ S
gcfold _⋆_ _⊕_ (l ▯ r) = gcfold _⋆_ _⊕_ l ⊕ gcfold _⋆_ _⊕_ r
That is, replace all arrows ⟶ with stars ⋆ and all separators ▯ with opluses ⊕ after the arguments have been transformed.
Note that you’ll get wonky things if ⊕ is not associative, due to the parenthesization; so that’s something the user of gcfold needs to be aware of. Indeed, the ambient assumption is that (A, ⊕) forms a semigroup, i.e., ⊕ is an associative operation on A, and so gcfold maps the semigroup of guarded commands homomorphically into any other semigroup.
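As a tiny illustration (not needed in what follows), here is a fold of our own: counting the alternatives by sending every guard-program pair to 1 and combining with addition, which is certainly associative.

-- a sketch: the number of alternatives of a guarded command
gc-length : GuardedCommand → ℕ
gc-length = gcfold (λ _ _ → 1) _+ℕ_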
Some examples,
-- ∨-guards(B₁ ⟶ S₁ ▯ ⋯ ▯ Bₙ ⟶ Sₙ) = B₁ ∨ ⋯ ∨ Bₙ
∨-guards : (Bᵢ⟶Sᵢ : GuardedCommand) → Expr 𝒫
∨-guards = gcfold (λ B S → B) _∨_
-- ∧¬-guards(B₁ ⟶ S₁ ▯ ⋯ ▯ Bₙ ⟶ Sₙ) = ¬ B₁ ∧ ⋯ ∧ ¬ Bₙ
∧¬-guards : GuardedCommand → Expr 𝒫
∧¬-guards = gcfold (λ B S → ¬ B) _∧_
-- This is just classically applying De Morgan to ¬(∨-guards ...)
-- given a collection of guarded commands, we want to easily state propositions about all of them
--
-- gc-prop(B₁ ⟶ S₁ ▯ ⋯ ▯ Bₙ ⟶ Sₙ) = p(B₁,S₁) ∧ ⋯ ∧ p(Bₙ,Sₙ)
gc-prop : ∀ {ℓ} (p : Expr 𝒫 × Program → Set ℓ) → GuardedCommand → Set ℓ
gc-prop p = gcfold (λ A B → p (A , B)) _×_
syntax gc-prop (λ ab → c) gc = all ab ∈ gc have c
Note that the above satisfy the aforementioned semigroup proviso in that _∧_, _∨_, _×_ are essentially (classically) associative.
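For instance, on the guarded command underlying max-prog, ∨-guards computes by mere unfolding of gcfold; the following anonymous proof is a sketch of our own, assuming the usual _≡_ and ≡-refl from MyPrelude.

-- a sanity check by computation alone
_ : ∨-guards (Var x ≤ Var y ⟶ z ≔ Var {𝒩} y ▯ Var y ≤ Var x ⟶ z ≔ Var {𝒩} x)
      ≡ (Var x ≤ Var y ∨ Var y ≤ Var x)
_ = ≡-refl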
Semantics
The semantics remain unchanged except for the addition of two new constructors.
Given Q, execution of if B₁ ⟶ S₁ ▯ ⋯ ▯ Bₙ ⟶ Sₙ fi establishes R precisely when Q makes the selection well-defined, in that Q ⇒ B₁ ∨ ⋯ ∨ Bₙ, and when each body, together with its guard, establishes R:
alternative : ∀ {Q R Bᵢ⟶Sᵢ}
            → Q ⇒ ∨-guards Bᵢ⟶Sᵢ
            → all BS ∈ Bᵢ⟶Sᵢ have ⟦ Q ∧ proj₁ BS ⟧ proj₂ BS ⟦ R ⟧
            → ⟦ Q ⟧ if Bᵢ⟶Sᵢ fi ⟦ R ⟧
Given Q, execution of do B₁ ⟶ S₁ ▯ ⋯ ▯ Bₙ ⟶ Sₙ od establishes R precisely when it terminates and there’s an ‘invariant’ that together with the negation of the guards establishes the post-condition R:
iterative : ∀ {Q P R Bᵢ⟶Sᵢ}
            -- ### partial correctness ###
          → -- invariant is initially true
            Q ⇒ P
          → -- invariant is maintained by (each) body
            all BS ∈ Bᵢ⟶Sᵢ have ⟦ P ∧ proj₁ BS ⟧ proj₂ BS ⟦ P ⟧
          → -- invariant and negation of loop guard(s) establish post-condition
            P ∧ (∧¬-guards Bᵢ⟶Sᵢ) ⇒ R
            -- ### termination ###
          → -- an integer function of the program variables
            (bf : Expr 𝒩)
          → -- while the loop runs, the bound function is positive
            P ∧ ∨-guards Bᵢ⟶Sᵢ ⇒ bf > 𝒩 0
          → -- each loop iteration decreases the bound function
            all BS ∈ Bᵢ⟶Sᵢ have (∀ c → ⟦ P ∧ proj₁ BS ∧ bf ≈ 𝒩 c ⟧ proj₂ BS ⟦ bf < 𝒩 c ⟧)
          → ⟦ Q ⟧ do Bᵢ⟶Sᵢ od ⟦ R ⟧
Gries’ text defines the semantics of programs via ‘weakest preconditions’, and that’s cool, but there’s a snag when one gets to loops: an auxiliary checklist-theorem is proved and used in practice, and that theorem is just the above constructor for us.
We also introduce two new syntactic sugarings,
syntax alternative {Q} {R} {Bᵢ⟶Sᵢ} Q⇒∃ᵢBᵢ ∀ᵢ⟦Q∧Bᵢ⟧Sᵢ⟦R⟧ = ⟦ Q ⟧⟨ Q⇒∃ᵢBᵢ ⟩if Bᵢ⟶Sᵢ fi⟨ ∀ᵢ⟦Q∧Bᵢ⟧Sᵢ⟦R⟧ ⟩⟦ R ⟧
syntax iterative {Q} {P} {R} Q⇒P ∀ᵢ⟦P∧Bᵢ⟧Sᵢ⟦P⟧ P∧¬BB⇒R bf P∧BB⇒bf>0 bf-dec
  = ⟦ Q ⟧⟨ Q⇒P ⟩⟦Invariant: P ,bound: bf ⟧do-maintenance ∀ᵢ⟦P∧Bᵢ⟧Sᵢ⟦P⟧ od⟨ P∧BB⇒bf>0 , bf-dec ⟩-since P∧¬BB⇒R ⟦ R ⟧
An example usage for the alternative,
R : Expr 𝒫
R = (Var z ≈ Var x ∨ Var z ≈ Var y) ∧ Var x ≤ Var z ∧ Var y ≤ Var z
max : ⟦ True ⟧ max-prog ⟦ R ⟧
max =
  ⟦ True ⟧⟨ ⊤⇒∃ᵢBᵢ
  ⟩if Var x ≤ Var y ⟶ z ≔ Var {𝒩} y
    ▯ Var y ≤ Var x ⟶ z ≔ Var {𝒩} x
  fi⟨ -- proving ∀ᵢ⟦Q∧Bᵢ⟧Sᵢ⟦R⟧
      ⟦ True ∧ Var x ≤ Var y ⟧ z ≔ Var {𝒩} y since fst-help ⟦ R ⟧
    , ⟦ True ∧ Var y ≤ Var x ⟧ z ≔ Var {𝒩} x since snd-help ⟦ R ⟧
  ⟩⟦ R ⟧
  where
    ⊤⇒∃ᵢBᵢ : True ⇒ Var x ≤ Var y ∨ Var y ≤ Var x
    ⊤⇒∃ᵢBᵢ = λ _ _ → ℕ-total _ _   -- where ℕ-total : ∀ m n → m ≤ℕ n ⊎ n ≤ℕ m

    -- i.e.: ⊤ ∧ x ≤ y ⇒ R [ Var {𝒩} y / z ]
    fst-help : True ∧ Var x ≤ Var y ⇒
               (Var y ≈ Var x ∨ Var y ≈ Var y) ∧ Var x ≤ Var y ∧ Var y ≤ Var y
    fst-help σ (tt , σx≤σy) = inj₂ ≡-refl , σx≤σy , ≤ℕ-refl

    snd-help : True ∧ Var y ≤ Var x ⇒
               (Var x ≈ Var x ∨ Var x ≈ Var y) ∧ Var x ≤ Var x ∧ Var y ≤ Var x
    snd-help σ (tt , σy≤σx) = inj₁ ≡-refl , ≤ℕ-refl , σy≤σx
Note that an alternative proof is
max = alternative ⊤⇒∃ᵢBᵢ (assgn fst-help , assgn snd-help)
  where assgn : ∀ {t Q v R} {e : Expr t} → Q ⇒ R [ e / v ] → ⟦ Q ⟧ v ≔ e ⟦ R ⟧
        assgn = {! easy exercise !}
However, the syntax-variant presented earlier is preferred as it is closer to an annotated program. Both are valid; it depends on one’s purposes: readability or just proof.
Euclidean Algorithm
We now present as an example the correctness of a greatest common divisor algorithm.
Since it is not our goal to produce a number theory library, we will simply postulate the facts that we need. In particular, some of these are consequences of variations on items in Data.Nat.Properties, such as _+-mono_ and m+n∸m≡n.
postulate arithmetic : ∀ {m n} → m <ℕ n → 0 <ℕ n ∸ m
postulate negation : ∀ {m n} → NOT (m <ℕ n) → n ≤ℕ m
postulate ≤ℕ-antisym : ∀{m n} → m ≤ℕ n → n ≤ℕ m → m ≡ n
postulate pos+ : ∀ {m n} → 0 <ℕ m → 0 <ℕ n → 0 <ℕ (m +ℕ n)
postulate m>n⇒m-n+n≡m : ∀ {m n} → n <ℕ m → (m ∸ n) +ℕ n ≡ m
postulate _⟨≡≤⟩_ : ∀ {m n k} → m ≡ k → k ≤ℕ n → m ≤ℕ n
We will now postulate the properties of gcd that we need to carry out the correctness proof. In particular, since no definition of gcd is given at all, but only certain constraints placed upon it, the correctness proof will work for any binary function on the naturals satisfying them! Anyhow, here are the postulates:
postulate _GCD_ : ℕ → ℕ → ℕ
postulate GCD-sym : ∀ {m n} → m GCD n ≡ n GCD m
postulate GCD-idemp : ∀ {n} → n GCD n ≡ n
postulate GCD-dec : ∀ {m n} → m GCD n ≡ (m ∸ n) GCD n
_gcd_ = unreify-ℕ₂ _GCD_
Notice that we used unreify-ℕ₂ to make a syntactic counterpart of the gcd operation.
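For instance, assuming unreify-ℕ₂ from the previous post has the expected type, the lifted operation applies to arbitrary numeric expressions; the anonymous definition below merely records a type, as a small sanity check of our own.

-- a sanity check: gcd on syntactic expressions
_ : Expr 𝒩
_ = Var x gcd Var y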
Let us now prove the correctness of the usual gcd algorithm
gcd-prog :
That is, for any
(X Y : ℕ) →
we have
⟦ 𝒩 X > 𝒩 0 ∧ 𝒩 Y > 𝒩 0 ⟧
x ≔ 𝒩 X ⨾ y ≔ 𝒩 Y ⨾
do Var x > Var y ⟶ x ≔ Var x - Var y
 ▯ Var y > Var x ⟶ y ≔ Var y - Var x
od
⟦ Var x ≈ 𝒩(X GCD Y) ⟧
Notice how it is a bit like the previous loop example: our initial position is (X,Y) and our target location is any (x,y) with x ≈ y. Dijkstra elaborates on this ‘game’ more in the zero-th chapter of his text A Discipline of Programming.
We won’t go into the details, but we are following a paper by Dijkstra,
gcd-prog X Y =
let
who takes as loop invariant
P : Expr 𝒫
P = 𝒩(X GCD Y) ≈ Var x gcd Var y ∧ Var x > 𝒩 0 ∧ Var y > 𝒩 0
For this to be a valid invariant, we must have that it is true before the loop even begins. That is, we need to show
⟦ 𝒩 X > 𝒩 0 ∧ 𝒩 Y > 𝒩 0 ⟧ x ≔ 𝒩 X ⨾ y ≔ 𝒩 Y ⟦ P ⟧
This is easily proven by two applications of the assignment rule. Formally,
asgn-lemma : 𝒩 X > 𝒩 0 ∧ 𝒩 Y > 𝒩 0
           ⇒ 𝒩(X GCD Y) ≈ 𝒩 X gcd 𝒩 Y ∧ 𝒩 X > 𝒩 0 ∧ 𝒩 Y > 𝒩 0
asgn-lemma = λ _ gvn → let (x>0 , y>0) = gvn in ≡-refl , x>0 , y>0
Moreover, to be a valid invariant it must not vary with each loop body; i.e., the loop bodies must maintain its truth. For the first body, we must show
⟦ P ∧ Var x > Var y ⟧ x ≔ Var x - Var y ⟦ P ⟧
which is tantamount, by the assignment rule, to proving
mnt₁h : P ∧ Var x > Var y
      ⇒ 𝒩(X GCD Y) ≈ (Var x - Var y) gcd Var y ∧ (Var x - Var y) > 𝒩 0 ∧ Var y > 𝒩 0
mnt₁h =
  λ _ gvn → let (XgcdY≈xgcdy , x>0 , y>0), x>y = gvn in
            XgcdY≈xgcdy ⟨≡≡⟩ GCD-dec , arithmetic x>y , y>0
Here we have unfolded the expression P [ (Var x - Var y) / x ] to guide the proof.
Likewise for the second body,
mnt₂h : P ∧ Var y > Var x
      ⇒ 𝒩(X GCD Y) ≈ Var x gcd (Var y - Var x) ∧ Var x > 𝒩 0 ∧ (Var y - Var x) > 𝒩 0
mnt₂h = λ σ gvn →
  let (XgcdY≈xgcdy , x>0 , y>0), y>x = gvn
      𝓍 , 𝓎 = lower (σ x) , lower (σ y)
  in
  (begin X GCD Y
       ≡⟨ XgcdY≈xgcdy ⟩
         𝓍 GCD 𝓎
       ≡⟨ GCD-sym ⟩
         𝓎 GCD 𝓍
       ≡⟨ GCD-dec ⟩
         (𝓎 ∸ 𝓍) GCD 𝓍
       ≡⟨ GCD-sym ⟩
         𝓍 GCD (𝓎 ∸ 𝓍) ∎) , x>0 , arithmetic y>x
Observe that the calculational proof could have been rendered succinctly as (XgcdY≈xgcdy ⟨≡≡⟩ GCD-sym) ⟨≡≡⟩ (GCD-dec ⟨≡≡⟩ GCD-sym), but this has the downside of being just a bit less humanly-consumable.
If the loop terminates, then none of the guards hold anymore and that along with the invariant ought to establish the post-condition:
P∧¬BB⇒R : P ∧ ¬ Var x > Var y ∧ ¬ Var y > Var x ⇒ Var x ≈ 𝒩(X GCD Y)
P∧¬BB⇒R = λ σ gvn →
  let (XgcdY≈xgcdy , x>0 , y>0), ¬x>y , ¬y>x = gvn
      𝓍 , 𝓎 = lower (σ x) , lower (σ y)
  in
  begin 𝓍
      ≡⟨ ≡-sym GCD-idemp ⟩
        𝓍 GCD 𝓍
      ≡⟨ ≡-cong (λ e → 𝓍 GCD e) (≤ℕ-antisym (negation ¬x>y) (negation ¬y>x)) ⟩
        𝓍 GCD 𝓎
      ≡⟨ ≡-sym XgcdY≈xgcdy ⟩
        X GCD Y
      ∎
Sweet! However, we’ve only proven partial correctness; it remains to show that the loop actually terminates. Since one loop body decreases x and the other decreases y, altogether the expression x + y is decreased each iteration, and so we choose that as our bound function.
bf : Expr 𝒩
bf = Var x + Var y
Indeed, it is positive whenever the loop can still run:
bf>0 : P ∧ (Var x > Var y ∨ Var y > Var x) ⇒ bf > 𝒩 0
bf>0 = λ _ gvn → let (_ , x>0 , y>0), _ = gvn in pos+ x>0 y>0
and is decreased by each body. For the first body,
dec₁h : ∀ c → P ∧ Var x > Var y ∧ bf ≈ 𝒩 c ⇒ (Var x - Var y) + Var y < 𝒩 c
dec₁h = λ c σ gvn →
  let (XgcdY≈ygcdy , x>0 , y>0), x>y , x+y≈c = gvn
      𝓍 , 𝓎 = lower (σ x) , lower (σ y)
  in ≤-begin suc ((𝓍 ∸ 𝓎) +ℕ 𝓎)
           =⟨ ≡-cong suc (m>n⇒m-n+n≡m x>y) ⟩
             suc 𝓍
           =⟨ definition {- of addition -} ⟩
             suc zero +ℕ 𝓍
           =⟨ +-comm 1 𝓍 ⟩
             𝓍 +ℕ suc zero
           ≤⟨ ≤ℕ-refl {𝓍} +-mono y>0 ⟩
             𝓍 +ℕ 𝓎
           =⟨ x+y≈c ⟩
             c
           ≤∎
dec₁ : (c : ℕ) → ⟦ P ∧ Var x > Var y ∧ bf ≈ 𝒩 c ⟧ x ≔ Var x - Var y ⟦ bf < 𝒩 c ⟧
dec₁ = assgn ∘ dec₁h
and for the second,
dec₂h : ∀ c → P ∧ Var y > Var x ∧ bf ≈ 𝒩 c ⇒ Var x + (Var y - Var x) < 𝒩 c
dec₂h = λ c σ gvn →
  let (XgcdY≈ygcdy , x>0 , y>0), y>x , x+y≈c = gvn
      𝓍 , 𝓎 = lower (σ x) , lower (σ y)
  in ≤-begin suc( 𝓍 +ℕ (𝓎 ∸ 𝓍) )
           =⟨ ≡-cong suc (+-comm 𝓍 (𝓎 ∸ 𝓍)) ⟩
             suc( (𝓎 ∸ 𝓍) +ℕ 𝓍 )
           =⟨ ≡-cong suc (m>n⇒m-n+n≡m y>x) ⟩
             suc 𝓎
           =⟨ definition {- of addition -} ⟩
             suc zero +ℕ 𝓎
           ≤⟨ x>0 +-mono ≤ℕ-refl {𝓎} ⟩
             𝓍 +ℕ 𝓎
           =⟨ x+y≈c ⟩
             c
           ≤∎
dec₂ : ∀ c → ⟦ P ∧ Var y > Var x ∧ bf ≈ 𝒩 c ⟧ y ≔ Var y - Var x ⟦ bf < 𝒩 c ⟧
dec₂ = assgn ∘ dec₂h
in ⟦ 𝒩 X > 𝒩 0 ∧ 𝒩 Y > 𝒩 0 ⟧
     x ≔ 𝒩 X since asgn-lemma
   ⟦ P [ 𝒩 Y / y ] ⟧
     ⨾⟨ ⇒-refl ⟩
   ⟦ P [ 𝒩 Y / y ] ⟧
     y ≔ 𝒩 Y since ⇒-refl
   ⟦ P ⟧
     ⨾⟨ ⇒-refl ⟩
   ⟦ P ⟧⟨ ⇒-refl
   ⟩⟦Invariant: P ,bound: bf
   ⟧do-maintenance
       ⟦ P ∧ Var x > Var y ⟧ x ≔ Var x - Var y since mnt₁h ⟦ P ⟧
     , ⟦ P ∧ Var y > Var x ⟧ y ≔ Var y - Var x since mnt₂h ⟦ P ⟧
   od⟨ bf>0 , (dec₁ , dec₂) ⟩-since P∧¬BB⇒R
   ⟦ Var x ≈ 𝒩(X GCD Y) ⟧
Observe that the code and proofs for the loop could have been rendered as
iterative {P} {P} (λ σ z₁ → z₁) (mnt₁ , mnt₂) P∧¬BB⇒R bf bf>0 (dec₁ , dec₂)
Anyhow, that’s pretty neat: we’ve shown the gcd algorithm is correct using a ubiquitous subset of an imperative language. Moreover, our proof is quite close to a paper-and-pencil one, but with all the details spelled out. When there is too much detail, it can be proven, hidden from the final draft, and then indicated as an exercise for the reader. That way we know the presentation is valid, yet not all the gory details are shown.
Hope you’ve had as much fun as I!