In mathematics and computer science, a matroid oracle is a subroutine through which an algorithm may access a matroid, an abstract combinatorial structure that can be used to describe the linear dependencies between vectors in a vector space or the spanning trees of a graph, among other applications.

The most commonly used oracle of this type is an independence oracle, a subroutine for testing whether a set of matroid elements is independent. Several other types of oracle have also been used; some of them have been shown to be weaker than independence oracles, some stronger, and some equivalent in computational power.[1]

Many algorithms that perform computations on matroids have been designed to take an oracle as input, allowing them to run efficiently without change on many different kinds of matroids, and without additional assumptions about what kind of matroid they are using. For instance, given an independence oracle for any matroid, it is possible to find the minimum weight basis of the matroid by applying a greedy algorithm that adds elements to the basis in sorted order by weight, using the independence oracle to test whether each element can be added.[2]
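
The following Python sketch illustrates this greedy algorithm, assuming access to the matroid only through an independence oracle; the function names (minimum_weight_basis, is_independent) are chosen for this example and do not come from the cited sources.

```python
# Illustrative sketch: a greedy minimum-weight basis using only an independence oracle.

def minimum_weight_basis(elements, weight, is_independent):
    """Return a minimum-weight basis of the matroid on `elements`.

    weight         -- mapping from each element to its (real) weight
    is_independent -- independence oracle: is_independent(frozenset) -> bool
    """
    basis = set()
    # Scan elements in increasing weight order; keep an element exactly when
    # adding it preserves independence of the set built so far.
    for e in sorted(elements, key=lambda x: weight[x]):
        if is_independent(frozenset(basis | {e})):
            basis.add(e)
    return basis


# Example: the rank-2 uniform matroid on {a, b, c, d}, in which a set is
# independent exactly when it has at most 2 elements.
if __name__ == "__main__":
    oracle = lambda S: len(S) <= 2
    print(minimum_weight_basis("abcd", {"a": 3, "b": 1, "c": 2, "d": 5}, oracle))
    # -> {'b', 'c'}
```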

In computational complexity theory, the oracle model has led to unconditional lower bounds proving that certain matroid problems cannot be solved in polynomial time, without invoking unproved assumptions such as the assumption that P ≠ NP. Problems that have been shown to be hard in this way include testing whether a matroid is binary or uniform, or testing whether it contains certain fixed minors.[3]

Why oracles?

Although some authors have experimented with computer representations of matroids that explicitly list all independent sets or all basis sets of the matroid,[4] these representations are not succinct: a matroid with $n$ elements may expand into a representation that takes space exponential in $n$. Indeed, the number of distinct matroids on $n$ elements grows doubly exponentially as

$2^{2^{\,n-\frac{3}{2}\log_2 n+O(\log\log n)}}$[5]

from which it follows that any explicit representation capable of handling all possible matroids would necessarily use exponential space.[6]
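
To spell out the final step: a representation scheme that uses at most $b$ bits per matroid can distinguish at most $2^b$ matroids, so giving every matroid on $n$ elements its own description forces $b \ge 2^{\,n-O(\log n)}$ for at least one matroid, which is exponential in $n$.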

Instead, different types of matroids may be represented more efficiently in terms of the structures from which they are defined: uniform matroids by their two numeric parameters, graphic matroids, bicircular matroids, and gammoids by graphs, linear matroids by matrices, and so on. However, an algorithm for performing computations on arbitrary matroids needs a uniform method of accessing its argument, rather than having to be redesigned for each of these matroid classes. The oracle model provides a convenient way of codifying and classifying the kinds of access that an algorithm might need.
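
The following sketch shows two very different matroid classes exposing the same independence-oracle interface, so that a generic algorithm (such as the greedy routine above) never needs to know which class it was given; the function names are chosen for this example.

```python
# Illustrative sketch: independence oracles for two concrete matroid classes.

def uniform_matroid_oracle(r):
    """Oracle for the uniform matroid U_{r,n}: a set is independent iff it has at most r elements."""
    return lambda S: len(S) <= r


def graphic_matroid_oracle(edges):
    """Oracle for the graphic matroid of a graph.

    edges -- mapping from each matroid element (edge name) to its pair of endpoints.
    A set of edges is independent exactly when it is a forest, which is checked
    here with a simple union-find structure.
    """
    def is_independent(S):
        parent = {}

        def find(v):
            parent.setdefault(v, v)
            while parent[v] != v:
                parent[v] = parent[parent[v]]  # path halving
                v = parent[v]
            return v

        for e in S:
            u, w = edges[e]
            ru, rw = find(u), find(w)
            if ru == rw:       # this edge would close a cycle
                return False
            parent[ru] = rw    # merge the two tree components
        return True

    return is_independent
```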

History

Starting with Rado (1942), "independence functions" or "$I$-functions" have been studied as one of many equivalent ways of axiomatizing matroids. An independence function maps a set of matroid elements to the number $1$ if the set is independent or $0$ if it is dependent; that is, it is the indicator function of the family of independent sets, essentially the same thing as an independence oracle.[7]

Matroid oracles have also been part of the earliest algorithmic work on matroids. Thus, Edmonds (1965), in studying matroid partition problems, assumed that access to the given matroid was through a subroutine that takes as input an independent set $I$ and an element $x$, and either returns a circuit in $I\cup\{x\}$ (necessarily unique and containing $x$, if it exists) or determines that no such circuit exists. Edmonds (1971) used a subroutine that tests whether a given set is independent (that is, in more modern terminology, an independence oracle), and observed that the information it provides is sufficient to find the minimum weight basis in polynomial time.

Beginning from the work of Korte & Hausmann (1978) and Hausmann & Korte (1978), researchers began studying oracles from the point of view of proving lower bounds on algorithms for matroids and related structures. These two papers by Hausmann and Korte both concerned the problem of finding a maximum cardinality independent set, which is easy for matroids but (as they showed) harder to approximate or compute exactly for more general independence systems represented by an independence oracle. This work kicked off a flurry of papers in the late 1970s and early 1980s showing similar hardness results for problems on matroids[8] and comparing the power of different kinds of matroid oracles.[9]

Since that time, the independence oracle has become standard for most research on matroid algorithms.[10] There has also been continued research on lower bounds,[11] and comparisons of different types of oracle.[12]

Types of oracles

The following types of matroid oracles have been considered. (A code sketch showing how several of them can be derived from an independence oracle appears after the list.)

  • An independence oracle takes as its input a set of matroid elements, and returns as output a Boolean value, true if the given set is independent and false otherwise.[13] It may be implemented easily based on the underlying structure from which the matroid was defined for graphic matroids, transversal matroids, gammoids, and linear matroids, and for matroids formed from these by standard operations such as direct sums.[3]
  • A basis oracle takes as its input a set of matroid elements, and returns as output a Boolean value, true if the given set is a basis and false otherwise.[9]
  • A circuit oracle takes as its input a set of matroid elements, and returns as output a Boolean value, true if the given set is a circuit and false otherwise.[9]
  • The circuit-finding oracle of Edmonds (1965) takes as input an independent set and an additional element, and either determines that their union is independent, or finds a circuit in the union and returns it.
  • A rank oracle takes as its input a set of matroid elements, and returns as its output a numerical value, the rank of the given set.[9]
  • Three types of closure oracle have been considered: one that tests if a given element belongs to the closure of a given set, a second one that returns the closure of the set, and a third one that tests whether a given set is closed.[9]
  • A spanning oracle takes as its input a set of matroid elements, and returns as output a Boolean value, true if the given set is spanning (i.e. contains a basis and has the same rank as the whole matroid) and false otherwise.[14]
  • A girth oracle takes as its input a set of matroid elements, and returns as its output a numerical value, the size of the smallest circuit within that set (or $\infty$ if the given set is independent).[14]
  • A port oracle for a fixed element $x$ of the matroid takes as its input a set of matroid elements, and returns as output a Boolean value, true if the given set contains a circuit that includes $x$ and false otherwise.[15]
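
The following Python sketch (function names chosen for this article's examples, not taken from the cited sources) derives a rank oracle, a closure-membership test, and a spanning oracle from an independence oracle; as discussed in the next section, these particular simulations take polynomial time, but not every oracle type can be obtained this way efficiently.

```python
# Illustrative sketch: several of the oracle types above, built on top of an
# independence oracle over a known ground set.

def rank_oracle_from_independence(is_independent):
    """rank(S): size of a maximal independent subset of S, built greedily."""
    def rank(S):
        indep = set()
        for e in S:
            if is_independent(frozenset(indep | {e})):
                indep.add(e)
        return len(indep)
    return rank


def closure_membership_from_independence(is_independent):
    """Test whether an element x lies in the closure of a set S."""
    rank = rank_oracle_from_independence(is_independent)
    return lambda S, x: x in S or rank(frozenset(S) | {x}) == rank(frozenset(S))


def spanning_oracle_from_independence(ground, is_independent):
    """Test whether S is spanning, i.e. has the same rank as the whole matroid."""
    rank = rank_oracle_from_independence(is_independent)
    full_rank = rank(frozenset(ground))
    return lambda S: rank(frozenset(S)) == full_rank
```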

Relative power of different oracles

Although there are many known types of oracles, the choice of which to use can be simplified, because many of them are equivalent in computational power. An oracle $X$ is said to be polynomially reducible to another oracle $Y$ if any call to $X$ may be simulated by an algorithm that accesses the matroid using only oracle $Y$ and takes polynomial time as measured in terms of the number of elements of the matroid; in complexity-theoretic terms, this is a Turing reduction. Two oracles are said to be polynomially equivalent if they are polynomially reducible to each other. If $X$ and $Y$ are polynomially equivalent, then every result that proves the existence or nonexistence of a polynomial time algorithm for a matroid problem using oracle $X$ also proves the same thing for oracle $Y$.

For instance, the independence oracle is polynomially equivalent to the circuit-finding oracle of Edmonds (1965). If a circuit-finding oracle is available, a set may be tested for independence using at most $n$ calls to the oracle by starting from an empty set, adding elements of the given set one element at a time, and using the circuit-finding oracle to test whether each addition preserves the independence of the set that has been constructed so far. In the other direction, if an independence oracle is available, the circuit in a set $A\cup\{x\}$ (for an independent set $A$ and element $x$) may be found using at most $n$ calls to the oracle by testing, for each element $y$ of $A\cup\{x\}$, whether $(A\cup\{x\})\setminus\{y\}$ is independent: the circuit consists of exactly the elements for which the answer is yes. The independence oracle is also polynomially equivalent to the rank oracle, the spanning oracle, the first two types of closure oracle, and the port oracle.[1]
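
The two simulations just described can be sketched as follows; the oracle interfaces (is_independent, find_circuit) follow the conventions of the earlier examples and are not taken from the cited sources.

```python
# Illustrative sketch: mutual simulation of the independence oracle and the
# circuit-finding oracle of Edmonds (1965).

def independence_from_circuit_finder(find_circuit):
    """Build an independence oracle from a circuit-finding oracle.

    find_circuit(I, x) is assumed to return a circuit contained in I | {x}
    (for an independent set I), or None if I | {x} is independent.
    """
    def is_independent(S):
        built = set()
        for x in S:                                  # add one element at a time
            if find_circuit(frozenset(built), x) is not None:
                return False                         # some addition created a circuit
            built.add(x)
        return True
    return is_independent


def circuit_finder_from_independence(is_independent):
    """Build the circuit-finding oracle from an independence oracle."""
    def find_circuit(I, x):
        S = frozenset(I) | {x}
        if is_independent(S):
            return None
        # S contains a unique circuit; it consists of exactly the elements
        # whose removal makes the remaining set independent.
        return frozenset(y for y in S if is_independent(S - {y}))
    return find_circuit
```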

The basis oracle, the circuit oracle, and the oracle that tests whether a given set is closed are all weaker than the independence oracle: they can be simulated in polynomial time by an algorithm that accesses the matroid using an independence oracle, but not vice versa. Additionally, none of these three oracles can simulate each other within polynomial time. The girth oracle is stronger than the independence oracle, in the same sense.[9]
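
One of the easy directions of this comparison, simulating a basis oracle with an independence oracle, can be sketched as follows (a set is a basis exactly when it is independent and cannot be extended by any element outside it); the names are again illustrative.

```python
# Illustrative sketch: a basis oracle simulated by an independence oracle.

def basis_oracle_from_independence(elements, is_independent):
    ground = frozenset(elements)

    def is_basis(S):
        S = frozenset(S)
        if not is_independent(S):
            return False
        # S is maximal independent iff no outside element can be added.
        return all(not is_independent(S | {e}) for e in ground - S)

    return is_basis
```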

Besides polynomial time Turing reductions, other types of reducibility have also been considered. In particular, Karp, Upfal & Wigderson (1988) showed that, in parallel algorithms, the rank and independence oracles are significantly different in computational power. The rank oracle allows a minimum weight basis to be constructed with $n$ simultaneous queries, one for each prefix of the sorted order of the matroid elements: an element belongs to the optimal basis if and only if the rank of its prefix differs from the rank of the previous prefix. In contrast, finding a minimum basis with an independence oracle is much slower: it can be solved deterministically in $O(\sqrt{n})$ time steps, and there is a lower bound of $\Omega(n^{1/3})$ even for randomized parallel algorithms.
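
The prefix-rank construction can be sketched as follows, assuming a rank oracle rank(S); the $n$ rank queries do not depend on one another, so in the parallel model they can all be issued simultaneously, although they are shown sequentially here for clarity.

```python
# Illustrative sketch: minimum-weight basis from prefix ranks.

def minimum_weight_basis_from_rank(elements, weight, rank):
    order = sorted(elements, key=lambda x: weight[x])
    basis, prefix, previous = set(), set(), 0
    for e in order:
        prefix.add(e)
        r = rank(frozenset(prefix))
        if r > previous:   # the prefix rank increased, so e belongs to the optimal basis
            basis.add(e)
        previous = r
    return basis
```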

Algorithms

Many problems on matroids are known to be solvable in polynomial time, by algorithms that access the matroid only through an independence oracle or another oracle of equivalent power, without need of any additional assumptions about what kind of matroid has been given to them. These polynomially-solvable problems include:

  • Finding a minimum or maximum weight basis of a weighted matroid, using a greedy algorithm.[2]
  • Partitioning the elements of a matroid into a minimum number of independent sets, and finding the largest set that is simultaneously independent in two given matroids. The latter problem is called matroid intersection, and the solutions to both problems are closely related to each other.[16]
  • Testing whether a matroid is $k$-connected (in the sense of Tutte 1966) for $k\le 3$.[17]
  • Testing whether a given matroid is graphic[18] or regular.[19]
  • Finding an ear decomposition of a given matroid, a sequence of circuits whose union is the matroid and in which each circuit remains a circuit after all previous circuits in the sequence are contracted. Such a decomposition may also be found with the additional property that a chosen matroid element belongs to every circuit.[15]
  • Finding a branch-decomposition of a given matroid, whenever its branch-width is no more than a fixed constant.[20]
  • Listing all of the bases, flats, or circuits of a matroid, in polynomial time per output set.[21]
  • Approximating the number of bases by a fully polynomial-time randomized approximation scheme, for a matroid with $n$ elements and rank $r$, with the additional assumption that the number of bases is within a polynomial factor of the number of $r$-element sets.[22]

Impossibility proofs

For many matroid problems, it is possible to show that an independence oracle does not provide enough power to allow the problem to be solved in polynomial time. The main idea of these proofs is to find two matroids $M$ and $M'$ on which the answer to the problem differs and which are difficult for an algorithm to tell apart. In particular, if $M$ has a high degree of symmetry, and differs from $M'$ only in the answers to a small number of queries, then it may take a very large number of queries for an algorithm to be sure of distinguishing an input of type $M$ from an input formed by using one of the symmetries of $M$ to permute $M'$.[3]

A simple example of this approach can be used to show that it is difficult to test whether a matroid is uniform. For simplicity of exposition, let $n$ be even, let $M$ be the uniform matroid $U_{n/2,n}$ of rank $n/2$ on $n$ elements, and let $M'$ be a matroid formed from $M$ by making a single one of the $(n/2)$-element basis sets of $M$ dependent instead of independent. In order for an algorithm to correctly test whether its input is uniform, it must be able to distinguish $M$ from every possible permutation of $M'$. But in order for a deterministic algorithm to do so, it must test every one of the $(n/2)$-element subsets of the elements: if it missed one set, it could be fooled by an oracle that chose that same set as the one to make dependent. Therefore, testing whether a matroid is uniform may require

$\binom{n}{n/2}$

independence queries, much higher than polynomial. Even a randomized algorithm must make nearly as many queries in order to be confident of distinguishing these two matroids.[23]

Jensen & Korte (1982) formalize this approach by proving that, whenever there exist two matroids $M$ and $M'$ on the same set of elements but with differing problem answers, an algorithm that correctly solves the given problem on those elements must use at least

$\dfrac{|\operatorname{Aut}(M)|}{|\mathcal{F}|\cdot|\operatorname{Aut}_{\mathcal{F}}(M)|}$

queries, where $\operatorname{Aut}(M)$ denotes the automorphism group of $M$, $\mathcal{F}$ denotes the family of sets whose independence differs from $M$ to $M'$, and $\operatorname{Aut}_{\mathcal{F}}(M)$ denotes the subgroup of automorphisms that maps $\mathcal{F}$ to itself. For instance, the automorphism group of the uniform matroid is just the symmetric group, with size $n!$, and in the problem of testing uniform matroids there was only one set in $\mathcal{F}$, with $|\operatorname{Aut}_{\mathcal{F}}(M)|=(n/2)!\,(n/2)!$, smaller by an exponential factor than $n!$.[24]
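
Plugging these values into the bound recovers the number of independence queries obtained by the direct argument above:

$\dfrac{|\operatorname{Aut}(M)|}{|\mathcal{F}|\cdot|\operatorname{Aut}_{\mathcal{F}}(M)|} = \dfrac{n!}{1\cdot(n/2)!\,(n/2)!} = \dbinom{n}{n/2}.$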

Problems that have been proven to be impossible for a matroid oracle algorithm to compute in polynomial time include:

  • Testing whether a given matroid is uniform.[23]
  • Testing whether a given matroid contains a fixed matroid $N$ as a minor, except in the special cases that $N$ is uniform with rank or corank at most one. More generally, if $\mathcal{N}$ is a fixed finite set of matroids, and there is no uniform matroid in $\mathcal{N}$, then it is not possible to test in polynomial time whether a given matroid contains one or more of the matroids in $\mathcal{N}$ as a minor.[25]
  • Testing whether a given matroid is binary, is representable over any particular fixed field, or whether there exists a field over which it is representable.[26]
  • Solving the matroid matching problem, in which the input is a graph and a matroid on its vertices, and the goal is to find a matching in the graph that is as large as possible, subject to the constraint that the matched vertices form an independent set.[27]
  • Testing whether a given matroid is self-dual, transversal, bipartite, Eulerian, or orientable.[3]
  • Computing the girth (size of the smallest circuit), size of the largest circuit, number of circuits, number of bases, number of flats, number of maximum-rank flats, size of the largest flat, Tutte polynomial, or connectivity of a given matroid.[3]

Among the set of all properties of $n$-element matroids, the fraction of the properties that do not require exponential time to test goes to zero, in the limit, as $n$ goes to infinity.[6]

Notes

  1. ^ a b Robinson & Welsh (1980); Hausmann & Korte (1981); Coullard & Hellerstein (1996).
  2. ^ a b Edmonds (1971).
  3. ^ a b c d e Jensen & Korte (1982).
  4. ^ Mayhew (2008).
  5. ^ Piff & Welsh (1971); Piff (1973); Knuth (1974); Bansal, Pendavingh & van der Pol (2012).
  6. ^ a b Robinson & Welsh (1980).
  7. ^ For additional research on matroids based on the independence function axiomatization, see e.g. Rado (1957), Lazarson (1958), and Ingleton (1959).
  8. ^ Lovász (1981); Seymour (1981); Seymour & Walton (1981); Jensen & Korte (1982); Truemper (1982).
  9. ^ a b c d e f Robinson & Welsh (1980); Hausmann & Korte (1981).
  10. ^ E.g. see Cunningham (1986), Kelmans & Polesskiĭ (1994), Fujishige & Zhang (1995), Chávez Lomelí & Welsh (1996), Khachiyan et al. (2005), and Oum & Seymour (2007).
  11. ^ Azar, Broder & Frieze (1994).
  12. ^ Karp, Upfal & Wigderson (1988); Coullard & Hellerstein (1996).
  13. ^ Edmonds (1971); Robinson & Welsh (1980); Hausmann & Korte (1981).
  14. ^ a b Hausmann & Korte (1981).
  15. ^ a b Coullard & Hellerstein (1996).
  16. ^ Edmonds (1965); Cunningham (1986).
  17. ^ Bixby & Cunningham (1979). A paper claiming a similar result for any fixed constant $k$ was announced by Cunningham and Edmonds at roughly the same time, but appears not to have been published. Truemper (1998), pp. 186–187, writes "Locating $k$-sums for general $k$ is much more difficult ... We do not know how this can be efficiently accomplished for binary matroids, let alone for general matroids."
  18. ^ Seymour (1981).
  19. ^ Truemper (1982).
  20. ^ Oum & Seymour (2007).
  21. ^ Khachiyan et al. (2005).
  22. ^ Chávez Lomelí & Welsh (1996). In contrast, it is not possible for deterministic algorithms to approximate the number of bases of a matroid accurately in polynomial time (Azar, Broder & Frieze 1994).
  23. ^ a b Robinson & Welsh (1980); Jensen & Korte (1982).
  24. ^ As well as being in Jensen & Korte (1982), this formalization is surveyed in Korte & Schrader (1981). In most of the applications of this technique in Jensen & Korte (1982), $M$ is uniform, but Seymour (1981) applies the same idea to a non-uniform but highly symmetric matroid.
  25. ^ Seymour & Walton (1981). Results of Seymour (1981) and Jensen & Korte (1982) give special cases of this for the problems of finding a $U_{2,4}$ minor and a Vámos matroid minor, respectively. Testing whether a matroid is graphic or regular may be expressed in terms of a finite set of forbidden minors, and may be solved in polynomial time, but the forbidden minors for these problems include the uniform matroid $U_{2,4}$, so they do not contradict this impossibility result.
  26. ^ Seymour (1981) showed this for binary matroids, Seymour & Walton (1981) for finite fields, Truemper (1982) for arbitrary fields, and Jensen & Korte (1982) for the existence of a field over which the matroid is representable.
  27. ^ Lovász (1981); Jensen & Korte (1982). However, the special case of this problem for bipartite graphs can be solved in polynomial time as a matroid intersection problem.
