Extending Common Intervals Searching from Permutations to Sequences
Irena Rusu
L.I.N.A., UMR 6241, Université de Nantes, 2 rue de la Houssiniére,
BP 92208, 44322 Nantes, France
Abstract
Common intervals have been defined as a modelisation of gene clusters in genomes represented either as permutations or as sequences. Whereas optimal algorithms for finding common intervals in permutations exist even for an arbitrary number of permutations, in sequences no optimal algorithm has been proposed yet even for only two sequences. Surprisingly enough, when sequences are reduced to permutations, the existing algorithms perform far from the optimum, showing that their performances are not dependent, as they should be, on the structural complexity of the input sequences.
In this paper, we propose to characterize the structure of a sequence by the number of different dominating orders composing it (called the domination number), and to use a recent algorithm for permutations in order to devise a new algorithm for two sequences. Its running time is in , where are the sizes of the two sequences, are their respective domination numbers, is the alphabet size and is the number of solutions to output. This algorithm performs better as and/or reduce, and when the two sequences are reduced to permutations (i.e. when ) it has the same running time as the best algorithms for permutations. It is also the first algorithm for sequences whose running time involves the parameter size of the solution. As a counterpart, when and are of and respectively, the algorithm is less efficient than other approaches.
1 Introduction
One of the main assumptions in comparative genomics is that a set of genes occurring in neighboring locations within several genomes represent functionally related genes [galperin2000s, lathe2000gene, tamames2001evolution]. Such clusters of genes are then characterized by a highly conserved gene content, but a possibly different order of genes within different genomes. Common intervals have been defined to model clusters [UnoYagura], and have been used since to detect clusters of functionally related genes [overbeek1999use, tamames1997conserved], to compute similarity measures between genomes [BergeronSim, AngibaudHow] and to predict protein functions [huynen2000predicting, von2003string].
Depending on the representation of genomes in such applications, allowing or not the presence of duplicated genes, comparative genomics requires for finding common intervals either in sequences or in permutations over a given alphabet. Whereas the most general  and thus useful in practice  case is the one involving sequences, the easiest to solve is the one involving permutations. This is why, in some approaches [AngibaudApprox, angibaud2006pseudo], sequences are reduced to permutations by renumbering the copies of the same gene according to evolutionary based hypothesis. Another way to exploit the performances of algorithms for permutations in dealing with sequences is to see each sequence as a combination of several permutations, and to deal with these permutations rather than with the sequences. This is the approach we use here.
In permutations on elements, finding common intervals may be done in time where is the number of permutations and the number of solutions, using several algorithms proposed in the literature [UnoYagura, BergeronK, heber2011common, IR2013]. In sequences (see Table 1), even when only two sequences and of respective sizes and are considered, the best solutions take quadratic time. In a chronological order, the first algorithm is due to Didier [didier2003common] and performs in time and space. Shortly later, Schmidt and Stoye [schmidt2004quadratic] propose an algorithm which needs space, and note that Didier’s algorithm may benefit from an existing result to achieve running time whereas keeping the linear space. Both these algorithms use to define, starting with a given element of it, growing intervals of with fixed leftpoint and variable rightpoint, that are searched for into . Alternative approaches attempt to avoid multiple searches of the same interval of , due to multiple locations, by efficiently computing all intervals in and all intervals in before comparing them. The best running time reached by such an algorithm is in , obtained by merging the fingerprint trees proposed in [kolpakov2008new], where (respectively ) is the number of maximal locations of the intervals in (respectively ), and is the size of the alphabet. The value (and similarly for ) is in and does not exceed .
The running times of all the existing algorithms have at least two main drawbacks: first, they do not involve at all the number of output solutions; second, they insufficiently exploit the particularities of the two sequences and, in the particular case where the sequences are reduced to permutations, need quadratic time instead of the optimal time for two permutations on elements. That means that their performances insufficiently depend both on the inherent complexity of the input sequences, and on the amount of results to output. Unlike the algorithms dealing with permutations, the algorithms for sequences lack of criteria allowing them to decide when the progressive generation of a candidate must be stopped, since it is useless. This is the reason why their running time is independent of the number of output solutions. This is also the reason why when sequences are reduced to permutations the running time is very unsatisfactory.
Sequence type  Didier [didier2003common]  Schmidt and Stoye[schmidt2004quadratic]  Kolpakov and Raffinot [kolpakov2008new]  Our algorithm 

Seq. vs. Seq.  
Perm. vs. Seq.  
Perm. vs. Perm.  
Memory space 
The most recent optimal algorithm for permutations [IR2013] proposes a general framework for efficiently searching for common intervals and all of their known subclasses in permutations, and has a twofold advantage, not proposed by other algorithms. First, it permits an easy and efficient selection of the common intervals to output based on two types of parameters. Second, assuming one permutation has been renumbered to be the identity permutation, it outputs all common intervals with the same minimum value together and in increasing order of their maximum value. We use here these properties to propose a new algorithm for finding common intervals in two sequences. Our algorithm strongly takes into account the structure of the input sequences, expressed by the number of different dominating orders (which are permutations) composing the sequence ( for permutations). Consequently, it has a complexity depending both on this structure and on the number of output solutions. It runs in optimal time for two permutations on elements, is better than the other algorithms for sequences composed of few dominating orders and, as a counterpart, it performs less well as the number of composing dominating orders grows.
The structure of the paper is as follows. In Section 2 we define the main notions, including that of a dominating order, and give the results allowing us a first simplification of the problem. In Section 3 we propose our approach for finding common intervals in two sequences based on this simplification, for which we describe the general lines. In Sections 4, 5 and 6 we develop each of these general lines and prove correctness and complexity results. Section 7 is the conclusion.
2 Preliminaries
Let be a sequence of length over an alphabet . We denote the length of by , the set of elements in by , the element of at position , , by and the subsequence of delimited by positions (included), with , by . An interval of is any set of integers from such that there exist with and . Then is called a location of on . A maximal location of on is any location such that neither nor is a location of .
When is the identity permutation , we denote , which is also . Note that all intervals of are of this form, and that each interval has a unique location on . When is an arbitrary permutation on elements (denoted in this case), we denote by the function which associates with each element of its position in . For a subsequence of , we also say that it is delimited by its elements and located at positions and . These elements are the delimiters of (note the difference between delimiters, which are elements, and their positions).
We now define common intervals of two sequences and of respective sizes and :
Definition 1.
[didier2003common, schmidt2004quadratic] A common interval of two sequences and over is a set of integers that is an interval of both and . A maximal location of is any pair of maximal locations of on (this is ) and respectively on (this is ).
Example 1.
Let
The problem we are concerned with is defined below. We assume, without loss of generality, that both sequences contain all the elements of the alphabet, so that .
Common Intervals Searching
Input:  Two sequences and of respective lengths and over an alphabet . 

Requires:  Find all maximal locations of common intervals of and , without redondancy. 
To address this problem, assume we add a new element (not in ) at positions 0 and of . Let Succ be the size array defined for each position with by if and is the smallest with this property (if does not exist, then ). Call the area of the position on the sequence .
Example 2.
With
Definition 2.
[didier2003common] The order associated with a position of , , is the sequence of all elements in ordered according to their first occurrence in . We note .
Remark 1.
Note that:
may be empty, and this holds iff .
if is not empty, then its first element is .
if is not empty, then contains each element in exactly once, and is thus a permutation on a subset of .
In the subsequent, we consider that a pretreatment has been performed on , removing every element which is equal to , , such that to guarantee that no empty order exists. In this way, the maximal locations are slightly modified, but this is not essential.
Let respectively be the positions in of the elements defining , i.e. the position in of their first occurrences in . Now, define to be the ordered sequence of these positions.
Example 3.
With
Definition 3.
Given a sequence and an interval of it, a maxmin location of on is any location of which is left maximal and right minimal, that is, such that neither nor is a location of on . A maxmin location of is any pair of maxmin locations of on (this is ) and respectively on (this is ).
It is easy to see that that maxmin locations and maximal locations are in bijection. We make this more precise as follows.
Claim 1.
The function associating with each maximal location of an interval in the maxmin location in such that is maximum with the properties and is a bijection. Moreover, if , then may be computed in when and are known.
Proof. It is easy to see that by successively removing from the rightmost element as long as it has a copy on its left, we obtain a unique interval such that is a minmax location of , and is maximum with this property. The inverse operation builds when is given.
Moreover, if , then . Then, assuming and are known and we want to compute , we have two cases. If , then is the position of the last element in and thus is computed as . If , then is the position in of the element preceding , that is, .
In the subsequent, and due to the preceding Claim, we solve the Common Interval Searching problem by replacing maximal locations with maxmin locations. Using Claim 1, it is also easy to deduce that:
Claim 2.
[didier2003common] The intervals of are the sets with . As a consequence, the common intervals of and are the sets with , which are also intervals of .
With these precisions, Didier’s approach [didier2003common] consists then in considering each order and, in total time (reducible to according to [schmidt2004quadratic]), verifying whether the intervals with are also intervals of . Our approach avoids to consider each order by defining dominating orders which contain other orders, with the aim of focalising the search for common intervals on each dominating order rather than spreading it on each of the orders it dominates.
We introduce now the supplementary notions needed by our algorithm.
Definition 4.
Let be two integers such that . We say that the order dominates the order if is a contiguous subsequence of . We also say that is dominated by .
Equivalently, is a contiguous subsequence of and the positions on of their common elements are the same.
Definition 5.
Let be such that . Order is dominating if it is not dominated by any other order of . The number of dominating orders of is the domination number of .
The set of orders of is provided with an order, defined as iff . For each dominating order of , its strictly dominated orders are the orders with such that is dominated by but is not dominated by any order preceding according to .
Example 4.
The orders of
For each dominating order (which is a permutation), we need to record the suborders which correspond to the strictly dominated orders. Only the left and right endpoints of each suborder are recorded, in order to limit the space and time requirements. Then, let the domination function of a dominating order be the partial function defined as follows.
For the other values of , is not defined. Note that , since by definition any dominating order strictly dominates itself. See Figure 2.
Example 5.
For
We know that, according to Claim 2, the common intervals of and must be searched among the intervals or, if we focus on one dominating order and its strictly dominated orders identified by , among the intervals for which is defined and . We formalize this search as follows.
Definition 6.
Let be a permutation on elements, and be a partial function such that and for all values for which is defined. A location of an interval of is valid with respect to if is defined for and .
Claim 3.
The maxmin locations of common intervals of and are in bijection with the triples such that:
is a dominating order of
the location on of the interval is valid with respect to
is a maxmin location of on .
Moreover, the triple associated with satisfies : is the dominating order that strictly dominates , and .
Proof. See Figure 2. By Claim 2, the common intervals of and are the sets with which are intervals of . We note that the sets are not necessarily distinct, but their locations on , given by , are distinct. Then, the maxmin locations of common intervals are in bijection with the pairs such that is a maxmin location of the interval on , which are themselves in bijection with the pairs such that the dominating order strictly dominates and is valid with respect to . More precisely, .
Corollary 1.
Each maxmin location of a common interval of and is computable in time if the corresponding triple and the sequence are known.
Looking for the maxmin locations of the common intervals of and thus reduces to finding the maxmin locations of common intervals for each dominating order and for , whose locations on are valid with respect to the dominating function of . The central problem to solve now is thus the following one (replace by , by and by ):
Guided Common Intervals Searching
Input:  A permutation on elements, a sequence of length on the same set of elements, a partial function such that and for all such that is defined. 

Requires:  Find all  maxmin locations of common intervals of and whose locations on are valid with respect to , without redondancy. 
As before, we assume w.l.o.g. that contains all the elements in , so that . Also, we denote . In this paper, we show (see Section 3, Theorem 1) that Guided Common Intervals Searching may be solved in time and space, where is its number of solutions for and . This running time gives the running time of our general algorithm. However, an improved running time of for solving Guided Common Intervals Searching would lead to a algorithm for the case of two sequences, improving the complexity of the existing algorithms.
3 The approach
The main steps for finding the maxmin locations of all common intervals in two sequences using the reduction to Guided Common Intervals Searching are given in Algorithm 1. Recall that for and we respectively denote their sizes, and their dominating numbers. The algorithms for computing each step are provided in the next sections.
To make things clear, we note that the dominating orders (steps 1 and 2) are computed but never stored simultaneously, whereas dominated orders are only recorded as parts of their corresponding dominating orders, using the domination functions. The initial algorithm for computing this information, in step 1 (and similarly in step 2), is too time consumming to be reused in steps 3 and 4 when dominating orders are needed. Instead, minimal information from steps 1 and 2 is stored, which allows to recover in steps 3 and 4 the dominating orders, with a more efficient algorithm. In such a way, we keep the space requirements in , and we perform steps 3, 4, 5 in global time , which is the best we may hope.
In order to solve Guided Common Intervals Searching, our algorithm cuts into dominating orders and then it looks for common intervals in permutations. This is done in steps 2, 4 and 5, as proved in the next theorem.
Theorem 1.
Steps 2, 4 and 5 in Algorithm 1 solve Guided Common Intervals Searching with input , and . Moreover, these steps may be performed in global time and space.
Proof. Claim 3 and Corollary 1 insure that the maxmin locations of common intervals of and , in this precise order, are in bijection with (and may be easily computed from) the triples such that is a dominating order of , is valid with respect to and is a maxmin location of on . Note that since is a permutation, each location is a maxmin location. Reducing these triples to those for which is valid w.r.t. , as indicated in step 5, we obtain the solutions of Guided Common Intervals Searching with input , and .
In order to give estimations of the running time and memory space, we refer to results proved in the remaining of this paper. Step 2 takes time and space assuming the orders are not stored (as proved in Section 4, Theorem 3), step 4 needs time and space to successively generate the orders from information provided by step 2 (Section 5, Theorem 4), whereas step 5 takes time and space, where is the number of solutions for Guided Common Intervals Searching (Section 6, Theorem 6).
Example 6.
With
Theorem 2.
Algorithm 1 solves the Common Intervals Searching problem in time, where is the size of the solution, and space.
We now discuss the running time and memory space, once again referring to results proved in the remaining sections. As proved in Theorem 3 (Section 4), Step 1 (and similarly Step 2) takes time and space, assuming that the dominating orders are identified by their position on and are not stored (each of them is computed, used to find its dominating function and then discarded). The positions corresponding to dominating orders are stored in decreasing order in a stack . The values of the dominating functions are stored as lists, one for each dominating order , whose elements are the pairs , in decreasing order of the value . This representation needs a global memory space of .
In step 3 the progressive computation of the dominating orders is done in time and space using the sequence and the list of positions of the dominating orders. The algorithm achieving this is presented in Section 5, Theorem 4. For each dominating order of , the orders of are successively computed in global time and space by the same algorithm, and are only temporarily stored. Step 5 is performed for and in time and space, where is the number of output solutions for Guided Common Intervals Searching (Section 6, Theorem 6).
Then the abovementioned running time of our algorithm easily follows.
To simplify the notations, in the next sections the size of is denoted by and its domination number is denoted . The vector Succ, as well as the vectors Prec and defined similarly later, are assumed to be computed once at the beginning of Algorithm 1.
4 Finding the dominating and dominated orders of
This task is subdivided into two parts. First, the dominating orders are found as well as, for each of them, the set of positions such that strictly dominates . Thus , where is known but is not known yet. In the second part of this section, we compute . Note that in this way we never store any dominated order, but only its position on and on the dominating order strictly dominating it. This is sufficient to retrieve it from when needed.
4.1 Find the positions such that is dominating/dominated
As before, let be the first sequence, with an additional element (new character) at positions 0 and . Recall that we assumed that neighboring elements in are not equal, and that we defined Succ to be the size array such that, for all with , if and is the smallest with this property (if does not exist, then ).
Given a subsequence of , slicing it into singletons means adding the character at the beginning and the end of , as well as a socalled separator (denoted ) after each element of which is the letter . And this, for each . Call the resulting sequence on .
Example 7.
With
Once is obtained from , successive removals of the separators are performed, and the resulting sequence is still called . Let a slice of be any maximal interval of positions in (recall that ) such that no separator exists in between and with . Note that in this case a separator exists after and a separator exists after , because of the maximality of the interval . With as defined above, immediately after has been sliced, every position in forms a slice.
Example 8.
With and obtained by slicing into singletons as in the preceding example, let now
Slices are disjoint sets which evolve from singletons to larger and larger disjoint intervals using separator removals. Two operations are needed, defining  as the reader will easily note  a UnionFind structure:

Remove a separator, thus merging two neighboring slices into a new slice. This is set union, between sets representing neighboring intervals.

Find the slice a position belongs to. In the algorithm we propose, this function is denoted by .
In the following, a position is resolved if its order has already been identified, either as a dominating or as a dominated order. Now, by calling Resolve() in Algorithm 2 successively for all (initially nonresolved), we find the dominating orders of and, for each of them, the positions such that is strictly dominated by . Note that the rightmost position of each dominated by is computed by the procedure RightEnd(), given in Section 4.2.
Example 9.
With
To prove the correctness of our algorithm, we first need two results.
Claim 4.
Order with is dominated by order iff and and .
Proof. Notice that, by definition, the positions in belong to .
””: Properties and are deduced directly from the definitions of an order and of order domination. If the condition is not true, then belongs to but not to (again by the definition of an order), a contradiction. Moreover, if, by contradiction, there is some , occurring respectively in positions and (choose each of them as small as possible with and ), then and , since only the first occurrence of is recorded in . But then and thus is not dominated by , a contradiction.
””: Let . Then the first occurrence of the element in is, by definition, at position . Moreover, by hypothesis and since , we deduce that the first occurrence of the element in is at position . Thus . It remains to show that is contiguous inside . This is easy, since any position in , not in but located between two elements of would imply the existence of an element whose first occurrence in belongs to ; this element would then belong to , and its position to , a contradiction.
Claim 5.
Let , and assume is dominating. Then is labeled as ”dominated by ” in Resolve() iff is strictly dominated by .
Proof. Note that may get a label during Resolve() iff is not resolved at the beginning of the procedure, in which case steps 23 of Resolve() insure that is labeled as ”dominating”. By hypothesis, we assume this label is correct. Now, is labeled as ”dominated by ” iff
(step 5), and
in step 7 we have that is not already resoved, and are in the same slice in the sequence where all the separators satisfying and have been removed (step 6).
The latter of the two conditions is equivalent to saying that contains only characters equal to , and , that is, only characters whose first occurrence in belongs to . This is equivalent to (i.e. no character in appears before ) and (all characters in have a first occurrence not later than ). But then the three conditions on the right hand of Claim 4 are fulfilled, and this means is dominated by . Given that step 8 is executed only once for a given position , that is, when is labeled as resolved, the domination is strict.
Now, the correctness of our algorithm is given by the following claim.
Claim 6.
Assume temporarily that the procedure is empty. Then calling Resolve() successively for correctly identifies the dominating orders and, for each of them, the positions such that strictly dominates . This algorithm takes time and space.
Proof. We prove by induction on that, at the end of the execution of Resolve(, we have for all with :
is labeled as ”dominating” iff and is dominating
is labeled as ”dominated by ” iff and is dominating and is strictly dominated by .
Say that a position is used if is unresolved when Resolve() is called. We consider two cases.
Case . The position is necessarily used (no position is resolved yet), thus is labeled as ”dominating” (step 3) and no other order will have this label during the execution of Resolve(). Now, is really dominating, as there is no , and property is proved. To prove , recalling that and , we apply Claim 5. Note that since in step 7 is already resolved.
Case . Assume by induction the affirmation we want to prove is true before the call of Resolve(). If is not used, that means is already resolved when Resolve() is called, and nothing is done. Properties are already satisfied due to the position such that dominates .
Assume now that is used. Then is labeled ”dominating” and we have to show that is really dominating. If this was not the case, then would be strictly dominated by some with , and by the inductive hypothesis it would have been labeled as so (property for ). But this contradicts the assumption that is unresolved at the beginning of Resolve(). We deduce that holds. To prove property , notice that it is necessarily true for and the corresponding dominated orders, by the inductive hypothesis and since Resolve() does not relabel any labeled order. To finish the proof of