Let us briefly summarize our notion of finite domains and exclusions:
The examples of this paper have indicated ways of employing our finite domain/exclusion concept for the compact representation of first-order knowledge. In RELFUN, domain/exclusion terms can also be used in the operator position, thus permitting a higher-order notation for knowledge like ``Functions factorial, fibonacci, or exponential applied to 0 return 1'' (domain anti-unification, generalizing also over operators/constructors, could extract this from three multiplied-out functional clauses):
dom[fac,fib,exp](0) :-& 1.    % F:dom[exp,sin](0) gives 1, F=exp

It will be instructive to observe which particular use of our domain/exclusion extension of LP is most profitable for a real-world representation task, e.g. in the areas of materials engineering or calendar management (e.g. just unify two agents' restrictions, ``All dates except May 12 and 23'' and ``Only May 9-13'': exc[12-may,23-may] is dom[9-may,...,13-may]).
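Since RELFUN clauses are not directly executable here, the behavior of such an operator-position domain can be mimicked in a short Python sketch (the function name and term representation are hypothetical, not part of RELFUN): a call whose operator is a finite domain of function names succeeds when the queried operator domain intersects the clause's operator domain on the matching argument, binding the operator variable F accordingly.

```python
def call_dom_operator(query_ops, clause_ops, arg, clause_arg, value):
    """Sketch of resolving F:dom[query_ops](arg) against the clause
    dom[clause_ops](clause_arg) :-& value.
    Returns (value, binding-for-F) on success, None on failure."""
    ops = sorted(set(query_ops) & set(clause_ops))   # domain intersection
    if arg != clause_arg or not ops:
        return None                                  # no operator overlap: fail
    # A singleton intersection collapses to a constant binding for F.
    binding = ops[0] if len(ops) == 1 else ("dom", ops)
    return value, binding

# Mirrors the comment in the clause above: F:dom[exp,sin](0) gives 1, F=exp.
print(call_dom_operator(["exp", "sin"], ["fac", "fib", "exp"], 0, 0, 1))
# → (1, 'exp')
```

A non-singleton intersection, e.g. querying with dom[fac,fib], would leave F bound to the narrowed domain dom[fac,fib] rather than a constant.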
An area for further theoretical work would be the extension of Herbrand models for finite domains and, more demanding (perhaps via ), finite exclusions. Concerning domain/exclusion anti-unification, it will be interesting to see how further inductive-LP or machine-learning methods based on classic anti-unification may profit from the domain/exclusion extension, using our recent LISP implementation of the rules introduced in section 7.

On the unification side, an efficient WAM compiler/emulator extension for our (variable-length!) finite domains and exclusions should be written, building on the RELational/FUNctional machine, FIDO III, and FLIP, all in COMMON LISP: WAM instructions for unifying constants such as get_constant would need a membership/non-membership test case for dom/exc structures; new instructions get_dom/get_exc could unify dom/exc structures, performing, e.g., intersection/union for other dom/exc structures (perhaps maintaining canonically ordered elements), etc. Also, it could be studied how our specialized finite domains/exclusions could be fruitfully characterized as a CLP()-like instance of the constraint-logic programming scheme, and whether they could be usefully combined with our RELFUN-implemented finite-domain constraints FINDOM or those in FIDO, or with concrete domains, or other, more general constraint formalisms.
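The case analysis that get_dom/get_exc would perform can be illustrated by a Python sketch (an assumed semantics, not the actual WAM extension; term representation is hypothetical): constants are tested for membership/non-membership, dom/dom pairs intersect, exc/exc pairs unite, dom/exc pairs take a set difference, and a singleton domain collapses to a constant.

```python
def normalize_dom(elems):
    """An empty dom fails; a singleton dom collapses to its constant."""
    if not elems:
        return None
    if len(elems) == 1:
        return next(iter(elems))
    return ("dom", elems)

def unify_terms(t1, t2):
    """Unify two of: ('dom', set), ('exc', set), or a constant.
    Returns the combined term, or None on failure. A real WAM extension
    might additionally keep elements canonically ordered."""
    if not isinstance(t1, tuple):
        t1, t2 = t2, t1                            # structured term first, if any
    if not isinstance(t1, tuple):
        return t1 if t1 == t2 else None            # constant vs constant
    kind1, elems1 = t1
    if not isinstance(t2, tuple):                  # get_constant-style test cases:
        if kind1 == "dom":
            return t2 if t2 in elems1 else None    # membership test
        return t2 if t2 not in elems1 else None    # non-membership test
    kind2, elems2 = t2
    if kind1 == kind2 == "dom":
        return normalize_dom(elems1 & elems2)      # dom/dom: intersection
    if kind1 == kind2 == "exc":
        return ("exc", elems1 | elems2)            # exc/exc: union
    dom = elems1 if kind1 == "dom" else elems2     # dom/exc: set difference
    exc = elems2 if kind1 == "dom" else elems1
    return normalize_dom(dom - exc)

# The calendar unification from above: exc[12-may,23-may] is dom[9-may,...,13-may]
result = unify_terms(("exc", {"12-may", "23-may"}),
                     ("dom", {"9-may", "10-may", "11-may", "12-may", "13-may"}))
# result is a dom containing 9-may, 10-may, 11-may, and 13-may
```

Note the asymmetry of normalization: a singleton dom[a] collapses to the constant a, whereas a singleton exc[a] remains a structure, since it still denotes all terms other than a.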
Finally, let us explore a possible non-ground extension of the treatment of solved disequations, e.g. X ≠ 1, as exclusion bindings, e.g. X is exc[1], if only to confirm that ground exclusions in fact constitute the `local optimum' suggested by section 3: Can we treat unsolved disequations, e.g. X ≠ Y, as exclusion bindings with non-ground rhs's, e.g. X is exc[Y] and/or Y is exc[X]? Well, we could store both binding directions, but let us choose one direction, say X is exc[Y], and put this into the substitution. If further computation instantiates Y to a constant, say 1, perhaps via a binding chain, the disequation reduces to a solved form, X is exc[1], treated as usual. If X thus specializes to a constant, 1, we can `swap' the disequation to a solved form, Y is exc[1], within the substitution. For an added disequation, say the unsolved X is exc[Z], the two bindings may be simplified to one, here X is exc[Y,Z]. For Y is exc[Z], after swapping, they can be joined to Y is exc[X,Z]; this avoids (possibly circular) instantiations like X is exc[exc[Z]], which is not equivalent to X is exc[Z] because ``≠'' is not transitive. If any variable of such a (generated) non-singleton, non-ground exclusion becomes instantiated, this exclusion becomes partially solved, now constraining unifiable values (e.g. is-lhs's). For example, X is exc[Y,Z] with Z is 2, i.e. X is exc[Y,2], excludes the binding X is 2. While such non-ground exclusions (generally, types) can treat a larger class of constraints as bindings directly put into the substitution, unlike constraints kept as delayed goals, they thus require very careful substitution updates and uses.
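The bookkeeping described above can be sketched in Python (hypothetical data structures, not the RELFUN implementation; variables are modeled as capitalized strings, constants as numbers): disequations on the same lhs merge into one exclusion binding, instantiating a variable partially solves exclusions mentioning it on an rhs, and the instantiated variable's own non-ground exclusions are `swapped' onto their variables.

```python
def is_var(t):
    return isinstance(t, str) and t[:1].isupper()   # 'X', 'Y', ... are variables

class Subst:
    def __init__(self):
        self.exc = {}   # variable -> list of excluded terms (constants or variables)

    def add_diseq(self, x, y):
        """Store X ≠ Y in one chosen direction, X is exc[Y]; repeated
        disequations on X merge into one binding, e.g. X is exc[Y,Z]."""
        elems = self.exc.setdefault(x, [])
        if y not in elems:
            elems.append(y)

    def bind(self, x, c):
        """Instantiate variable x to constant c; return False if x's
        (partially) solved exclusion already excludes c."""
        if c in self.exc.get(x, []):
            return False
        # Partially solve exclusions mentioning x on their rhs's.
        for elems in self.exc.values():
            elems[:] = [c if e == x else e for e in elems]
        # 'Swap' x's own remaining non-ground exclusions onto their variables.
        for e in self.exc.pop(x, []):
            if is_var(e):
                self.add_diseq(e, c)    # e.g. Y is exc[c] after the swap
        return True

# Walk through the example from the text:
s = Subst()
s.add_diseq('X', 'Y')      # X is exc[Y]
s.add_diseq('X', 'Z')      # joined to X is exc[Y,Z]
s.bind('Z', 2)             # X is exc[Y,2]: partially solved
s.bind('X', 2)             # fails: X is exc[Y,2] excludes X is 2
```

The careful updates the text warns about show up even in this toy version: every bind must traverse all exclusion lists, and the swap step can itself extend other variables' exclusions.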