Let us briefly summarize our notion of finite domains and exclusions:

- They are useful even without constraint (delay!) techniques because their backtracking-superseding `closed' representation leads to
  - smaller proof trees (efficiency),
  - abstracted, intensional answers (readability).

- We have generalized them to first-class citizens (values of logical variables and of functions, usable anonymously as arguments and inside structures, no `floundering' for non-singleton domain results).
- Their complementarity wrt unification (most general specialization) `changes signs' wrt anti-unification (least general generalization).
- Their operational semantics and interpreter implementation are given by extensions of the unification routine of LP languages (specified here via meta-unification).
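As a minimal illustration of the last point (a Python sketch, not RELFUN's actual meta-unification routine; `dom`/`exc` terms are modelled as tagged frozensets and constants as plain values), the extended unification computes intersections, unions, and (non-)membership tests:

```python
# Sketch of unification extended to first-class finite domains
# (dom[...]) and exclusions (exc[...]).  Representation is an
# assumption for illustration: ('dom', frozenset), ('exc', frozenset),
# or a plain (non-tuple) constant.

def unify(a, b):
    """Return the most general specialization of a and b, or None on failure."""
    # constant vs constant: ordinary equality test
    if not isinstance(a, tuple) and not isinstance(b, tuple):
        return a if a == b else None
    # normalize so that a is always the tagged term when only one side is
    if not isinstance(a, tuple):
        a, b = b, a
    tag_a, set_a = a
    if not isinstance(b, tuple):               # tagged term vs constant
        if tag_a == 'dom':
            return b if b in set_a else None       # membership test
        return b if b not in set_a else None       # non-membership test
    tag_b, set_b = b
    if tag_a == 'dom' and tag_b == 'dom':          # dom/dom: intersection
        s = set_a & set_b
    elif tag_a == 'exc' and tag_b == 'exc':        # exc/exc: union stays an exclusion
        return ('exc', set_a | set_b)
    else:                                          # dom/exc: set difference
        d = set_a if tag_a == 'dom' else set_b
        e = set_b if tag_a == 'dom' else set_a
        s = d - e
    if not s:
        return None                                # empty domain: failure
    if len(s) == 1:
        return next(iter(s))                       # singleton collapses to a constant
    return ('dom', s)
```

Note how a singleton result collapses to an ordinary constant, so domain terms disappear exactly when they carry no alternatives, and an empty intersection signals failure without backtracking over individual elements.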

The examples of this paper have indicated ways of employing
our finite domain/exclusion concept for
the compact representation of first-order knowledge.
In RELFUN, domain/exclusion terms can also be used in the operator
position, thus permitting a higher-order notation for
knowledge like
``Functions `fac`torial, `fib`onacci, or `exp`onential applied to 0 return 1'' (domain anti-unification also generalizing
operators/constructors could extract this from three
multiplied-out functional clauses):

dom[fac,fib,exp](0) :-& 1.   % F:dom[exp,sin](0) gives 1, F=exp

It will be instructive to observe which particular use of our domain/exclusion extension of LP is most profitable for a real-world representation task, e.g. in the areas of materials engineering [3] or calendar management (e.g. just unify two agents' restrictions, ``All dates except May 12 and 23'' and ``Only May 9-13'').
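The calendar-management unification just mentioned amounts to a simple set computation; a sketch, assuming for illustration that dates in May are encoded as day numbers:

```python
# Sketch of the calendar-management example:
#   agent A: "All dates except May 12 and 23"  ->  exc[12,23]
#   agent B: "Only May 9-13"                   ->  dom[9,...,13]
# Unifying them (most general specialization) removes A's exclusions
# from B's domain.

exc_a = {12, 23}                      # A's exclusion
dom_b = set(range(9, 14))             # B's domain: May 9-13
meeting_days = sorted(dom_b - exc_a)  # result: dom[9,10,11,13]
print(meeting_days)                   # prints [9, 10, 11, 13]
```

The intensional answer dom[9,10,11,13] enumerates all admissible meeting days in one closed term, instead of backtracking through them one by one.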

An area for further theoretical work would be the extension of Herbrand models for
finite domains and, more demanding (perhaps via [9]), finite exclusions.
Concerning domain/exclusion anti-unification,
it will be interesting to see how further inductive-LP
or machine-learning methods
based on classic anti-unification may profit from the
domain/exclusion extension, using our recent
LISP implementation [5]
of the rules introduced in section 7.
On the unification side, an efficient WAM compiler/emulator
extension for our (variable-length!) finite domains and exclusions should be written,
building on the RELational/FUNctional machine [1],
FIDO III [15], [7],
and FLIP [14], all in COMMON LISP:
WAM instructions for unifying constants such as
`get_constant` would need a membership/non-membership test case for `dom`/`exc` structures; new instructions `get_dom`/`get_exc` could
unify `dom`/`exc` structures,
performing, e.g., intersection/union
for other `dom`/`exc` structures (perhaps maintaining canonically ordered elements), etc.
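The two ingredients just listed can be sketched as follows (a hypothetical Python model, not the actual WAM extension; instruction names and cell representations are illustrative): an extended `get_constant` with membership/non-membership cases, and a linear-time intersection over canonically ordered domain elements.

```python
# Hypothetical model of the suggested WAM extension.

def get_constant(c, cell):
    """Unify constant c with a tagged heap cell; return new cell or None."""
    tag, val = cell
    if tag == 'const':
        return cell if val == c else None
    if tag == 'dom':
        return ('const', c) if c in val else None      # membership test
    if tag == 'exc':
        return ('const', c) if c not in val else None  # non-membership test
    raise ValueError('unexpected cell tag: ' + tag)

def dom_intersect(d1, d2):
    """Intersect two canonically ordered element tuples by a merge scan."""
    i = j = 0
    out = []
    while i < len(d1) and j < len(d2):
        if d1[i] == d2[j]:
            out.append(d1[i]); i += 1; j += 1
        elif d1[i] < d2[j]:
            i += 1
        else:
            j += 1
    return tuple(out)
```

Maintaining canonically ordered elements pays off here: the merge scan intersects two domains in time linear in their lengths, instead of the quadratic element-by-element search an unordered representation would need.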
Also, it could be studied how our specialized
finite domains/exclusions could be fruitfully **characterized** as
a CLP()-like instance of the constraint-logic programming scheme [8], and if they could
be usefully **combined** with our RELFUN-implemented
finite-domain constraints FINDOM [13] or those in
FIDO [10],
or with concrete domains [6],
or other, more general
constraint formalisms.

Finally,
let us explore a possible non-ground extension of the treatment of solved disequations, e.g. `X ≠ 1`, as
exclusion bindings, e.g. `X is exc[1]`,
if only to confirm that ground exclusions in fact
constitute the `local optimum' suggested by section 3:
Can we treat unsolved disequations, e.g. `X ≠ Y`,
as exclusion bindings with non-ground rhs's, e.g. `X is exc[Y]`
and/or `Y is exc[X]`?
Well, we could store both binding directions, but let us choose one direction, say
`X is exc[Y]`,
and put this into the substitution.
If further computation instantiates `Y` to a constant,
say `1`, perhaps via a binding chain, the disequation
reduces to a solved form, `X is exc[1]`, treated as usual. If `X` thus specializes to a constant,
`1`, we can `swap' the disequation to a solved
form,
`Y is exc[1]`, within the substitution. For an added disequation, say the unsolved
`X is exc[Z]`, the two bindings may be simplified to one, here
`X is exc[Y,Z]`.
For `Y is exc[Z]`, after swapping, they can be joined to `Y is exc[X,Z]`;
this avoids (possibly circular) instantiations like `X is exc[exc[Z]]`,
non-equivalent to `X is exc[Z]` because ``≠'' is not transitive.
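A hypothetical sketch of this simplification, with the substitution modelled as a Python dict from variables to flat exclusion lists (`add_diseq` is an illustrative name, not part of any implementation discussed above):

```python
# The substitution maps each variable to a flat (possibly non-ground)
# exclusion list, so a further disequation for an already-excluded
# variable extends the list rather than nesting exc terms.

def add_diseq(subst, lhs, rhs):
    """Record lhs =/= rhs as the exclusion binding lhs is exc[...]."""
    subst[lhs] = subst.get(lhs, []) + [rhs]
    return subst

s = add_diseq({}, 'X', 'Y')   # X is exc[Y]
s = add_diseq(s, 'X', 'Z')    # simplified to X is exc[Y,Z], not X is exc[exc[Z]]
```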
If any variable of such a (generated) non-singleton, non-ground exclusion becomes
instantiated, this exclusion becomes partially solved, now
constraining unifiable values (e.g. `is`-lhs's).
For example, `X is exc[Y,Z], Z is 2` or `X is exc[Y,2]`
excludes the binding `X is 2`.
Such non-ground exclusions (more generally, types)
allow a larger class of constraints to be treated as bindings
put directly into the substitution, rather than as delayed goals,
but they will thus require very careful substitution updates and uses.
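The partial-solving step can be sketched as follows (a hypothetical model; `instantiate` and `admissible` are illustrative names):

```python
# Partially solving a non-ground exclusion: instantiating Z to 2
# inside X is exc[Y,Z] yields X is exc[Y,2], which then rejects the
# candidate binding X is 2 while leaving other values admissible.

def instantiate(excl, var, value):
    """Substitute value for var inside an exclusion list."""
    return [value if t == var else t for t in excl]

def admissible(excl, candidate):
    """A candidate binding is rejected iff it occurs in the exclusion."""
    return candidate not in excl

x_exc = instantiate(['Y', 'Z'], 'Z', 2)  # X is exc[Y,2]
```

Here `admissible(x_exc, 2)` is false, i.e. the binding `X is 2` is excluded, while `X is 3` remains possible as long as `Y` is uninstantiated.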