Caveats

There are a few things to keep in mind when using python-nnf.

Node duplication

If the same node occurs multiple times in a sentence, it often pays to create it only once and reuse the object, rather than constructing it anew each time.

Here’s a (contrived) example of two ways to construct the same sentence:

>>> from nnf import Var, And, Or
>>> A, B = Var('A'), Var('B')
>>> inefficient = And({
...     Or({A, B}),
...     And({A, Or({A, B})}),
... })
>>> dup_node = Or({A, B})
>>> efficient = And({
...     dup_node,
...     And({A, dup_node}),
... })

These objects behave identically, but the first one stores the node Or({A, B}) as two separate objects, while the second stores it only once. That means the second one uses less memory.

For a lot of sentences, this isn’t worth worrying about. But if many nodes occur multiple times, and those nodes in turn descend from nodes that occur multiple times, the duplication compounds and you may end up using far more memory than necessary.

The .object_count() and .deduplicate() methods exist to diagnose and address this problem: .object_count() tells you how many actual objects are used to represent the sentence, and .deduplicate() returns a maximally compact copy in which equal nodes are shared.

If .deduplicate() lowers .object_count() by a lot, the sentence would benefit from being constructed more carefully, reusing node objects instead of creating equal ones multiple times.

>>> inefficient.object_count()
6
>>> inefficient.deduplicate().object_count()
5

In this case the difference is pretty small.

Decomposability and determinism

Some methods, such as model enumeration, are much faster on sentences that are decomposable or deterministic.

Decomposability is automatically detected.

Determinism is too expensive to detect automatically, but it can give a huge speedup. If you know a sentence is deterministic, call .mark_deterministic() on it to enable the relevant optimizations.
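As a minimal sketch of how this might look, where sentence stands in for a sentence you know to be deterministic (constructed elsewhere) and model enumeration is used as the operation that benefits:

>>> sentence.mark_deterministic()
>>> models = list(sentence.models())  # enumeration can now use determinism-based optimizations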

A compiler like DSHARP may be able to convert some sentences into equivalent deterministic decomposable sentences. The output of DSHARP can be loaded using the nnf.dsharp module. Sentences returned by nnf.dsharp.compile() are automatically marked as deterministic.
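As a sketch, assuming the dsharp executable is installed and on your PATH, and that cnf_sentence is a CNF sentence (an And of Or of Var) you built elsewhere:

>>> from nnf import dsharp
>>> compiled = dsharp.compile(cnf_sentence)  # invokes the external DSHARP compiler
>>> models = list(compiled.models())         # benefits from the deterministic marking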

Other duplication inefficiencies

Even when properly deduplicated, sentences of the kind that are vulnerable to node duplication can still be inefficient to work with for some operations.

A known offender is equality (==). Currently, comparing two such sentences that are equal but don’t share any objects can take a very long time, even if neither sentence contains any duplication within itself.
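As an illustrative sketch, reusing the A and B variables from above: building the same sentence twice from scratch produces two objects that compare equal but share nothing, and on large sentences that comparison can become very slow (the toy example below is of course still fast):

>>> left = And({Or({A, B}), And({A, B})})
>>> right = And({Or({A, B}), And({A, B})})
>>> left == right  # equal, but no objects are shared between them
True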