Mimicking Behaviors in Separated Domains

Devising a strategy to make a system mimic the behaviors of another system is a problem that naturally arises in many areas of Computer Science. In this work, we interpret this problem in the context of intelligent agents, from the perspective of LTLf, a formalism commonly used in AI for expressing finite-trace properties. Our model consists of two separated dynamic domains, D_A and D_B, and an LTLf specification that formalizes the notion of mimicking by mapping properties on behaviors (traces) of D_A into properties on behaviors of D_B. The goal is to synthesize a strategy that step-by-step maps every behavior of D_A into a behavior of D_B so that the specification is met. We consider several forms of mapping specifications, ranging from simple ones to full LTLf, and for each we study synthesis algorithms and computational properties.


Introduction
Mimicking a behavior from a system A to a system B is a common practice in Computer Science (CS) and Software Engineering (SE). Examples include a robot that has to adapt to a human behavior in real time (Mitsunaga, Smith, Kanda, Ishiguro, & Hagita, 2008), or simultaneous interpretation of a speaker (Yarmohammadi, Sridhar, Bangalore, & Sankaran, 2013; Zheng, Liu, Zheng, Ma, Liu, & Huang, 2020). The challenge in behavior mimicking is twofold. Firstly, a formal specification of mimicking is needed; indeed, being potentially different, systems A and B may show substantially different behaviors, not directly comparable, thus a relationship, or map, between them must be formally defined to capture when a behavior from A is correctly mimicked by one from B. Secondly, since B ignores what A will do next, B must monitor the actions performed by A and perform its own actions in such a way that the resulting behavior of B mimics that of A.
In this work, we look at the problem of devising a strategy for mimicking behaviors when the mapping specification is expressed in Linear Temporal Logic on finite traces (ltl_f) (De Giacomo & Vardi, 2013), a formalism commonly used in AI for expressing finite-trace properties. In our framework, systems A and B are modeled by two separated dynamic domains, D_A and D_B, in turn modeled as transition systems, over which there are agents A and B that respectively act, without affecting each other. The mapping specification is then a set of ltl_f formulas to be taken in conjunction, called mappings, that essentially relate the behaviors of A to those of B. While B has full knowledge of both domains and their states, it has no idea which action A will take next. Nevertheless, in order to perform mimicking, B must respond to every action that A performs on D_A by performing one action on D_B. As this interplay proceeds, D_A and D_B traverse two respective sequences of states (traces) which we call the behaviors of A and B, respectively. The process carries on until either A or B (depending on the variant of the problem considered) decides to stop. The mimicking from A has been accomplished correctly, i.e., agent B wins, if the resulting traces satisfy the ltl_f mapping specification. Our goal is to synthesize a strategy for B, i.e., a function returning an action for B given those executed so far by agent A, which guarantees that B wins, i.e., is able to mimic, respecting the mappings, every behavior of A. We call this the Mimicking Behaviors in Separated Domains (MBSD) problem.
The mapping specifications can vary, consequently changing the nature of the mimicking and, with it, the difficulty of synthesizing a strategy for B. We study three different types of mappings. The first is the class of point-wise mappings, which establish a sort of local connection between the two separated domains. Point-wise mapping specifications have the form ∧_{i≤k} □(φ_i → ψ_i) (see Section 2.2 for the proper ltl_f definitions), where each φ_i is a Boolean property over D_A and each ψ_i is a Boolean property over D_B. Point-wise mappings indicate invariants that are to be kept throughout the interaction between the agents. In Section 4.1 we give a detailed example of point-wise mappings from the Pac-Man world.
The second class is that of target mappings, which relate the ability of satisfying corresponding reachability goals (much in the same fashion as Planning) in the two separated domains. Target mapping specifications have the form ∧_{i≤k} (♦φ_i → ♦ψ_i), where φ_i and ψ_i are Boolean properties over D_A and D_B, respectively. Target mappings define objectives for A and B and require that if A meets its objective then B must meet its own as well, although not necessarily at the same time. We give a detailed example of target mappings in Section 5.1, from the Rubik's cube world. The last class is that of general ltl_f mappings. A general ltl_f mapping specification has the form of an arbitrary ltl_f formula Φ with properties over D_A and D_B.
Our objective is to characterize solutions for strategy synthesis for mimicking behaviors under the types of mapping specifications described above, from both the algorithmic and the complexity point of view. The input we consider includes both domains D_A and D_B, and the mapping specification. Since it is common to focus on problems in which either of the two is fixed (e.g., (De Giacomo & Rubin, 2018)), we provide solutions in terms of: combined complexity, where neither the size of the domains nor that of the mapping specification is fixed; mapping complexity, where the domains' sizes are fixed but the mapping specification's varies; and domain complexity, where the mapping specification's size is fixed but the domains' vary.
For our analysis, we formalize the problem as a two-player game between agent A (Player 1) and agent B (Player 2) over a game graph that combines both domains D_A and D_B, with the winning objective varying according to the classes discussed above. We start with point-wise mappings, where A decides when to stop, and derive a solution in the form of a winning strategy for a safety game in PTIME wrt combined, mapping and domain complexity. The scenario becomes more complex for target mappings, where agent B decides when to stop, and where some objectives met during the agents' interplay must be recorded. We devise an algorithm exponential in the number of constraints, and show that the problem is in PSPACE for combined and mapping complexity, and in PTIME for domain complexity. To seal the complexity of the problem, we provide a PSPACE-hardness proof for combined complexity, already for simple acyclic graph structures. For domains whose transitions induce a tree-like structure, however, we show that the problem is still in PTIME for combined, mapping and domain complexity. Finally, we show that the problem with general ltl_f mapping specifications is in 2EXPTIME for combined and mapping complexity, due to the doubly-exponential blowup of the DFA construction for ltl_f formulas, and in PTIME for domain complexity.
The rest of the paper goes as follows. In Section 2 we give preliminaries, and we formalize our problem in Section 3. We give detailed examples and analyses of point-wise and target mapping specifications in Sections 4 and 5, respectively. We discuss solutions for general mapping specifications in Section 6. Then we provide a more detailed discussion of related work in Section 7, and conclude in Section 8.

Preliminaries
We briefly recall preliminary notions that will be used throughout the paper.

Boolean Formulas
Boolean (or propositional) formulas are defined, as standard, over a set of propositional variables (or, simply, propositions) Prop, by applying the Boolean connectives ∧ (and), ∨ (or) and ¬ (not). Standard abbreviations are → (implies), true (also denoted ⊤) and false (also denoted ⊥). A proposition p ∈ Prop occurring in a formula is called an atom, a literal is an atom p or a negated atom ¬p, and a clause is a disjunction of literals. A Boolean formula is in Conjunctive Normal Form (CNF) if it is a conjunction of clauses. The size of a Boolean formula ϕ, denoted |ϕ|, is the number of connectives occurring in ϕ. A Quantified Boolean Formula (QBF) is a Boolean formula all of whose variables are universally or existentially quantified. A QBF formula is in Prenex Normal Form (PNF) if all quantifiers occur in the prefix of the formula. True Quantified Boolean Formulas (TQBF) is the language of all QBF formulas in PNF that evaluate to true. TQBF is known to be PSPACE-complete.

LTL f Basics
Linear Temporal Logic over finite traces (ltl_f) is an extension of propositional logic to describe temporal properties of finite (unbounded) traces (De Giacomo & Vardi, 2013). ltl_f has the same syntax as ltl, one of the most popular logics for temporal properties on infinite traces (Pnueli, 1977). Given a set of propositions Prop, the formulas of ltl_f are generated by the following grammar:

ϕ ::= p | ¬ϕ | ϕ ∧ ϕ | •ϕ | ϕ U ϕ

where p ∈ Prop, • is the next temporal operator and U is the until temporal operator, both standard in ltl_f. We use the common abbreviations eventually ♦ϕ ≡ true U ϕ and always □ϕ ≡ ¬♦¬ϕ.
A word over Prop is a sequence π = π_0 π_1 ⋯, s.t. π_i ∈ 2^Prop, for i ≥ 0. Intuitively, π_i is interpreted as the set of propositions that are true at instant i. In this paper we deal only with finite, nonempty words, i.e., π = π_0 ⋯ π_{last(π)}, where last(π) denotes the index of the last instant of π. Given a finite word π and an ltl_f formula ϕ, we inductively define when ϕ is true on π at instant i ∈ {0, . . ., last(π)}, written π, i |= ϕ, as follows:
• π, i |= p iff p ∈ π_i;
• π, i |= ¬ϕ iff it is not the case that π, i |= ϕ;
• π, i |= ϕ_1 ∧ ϕ_2 iff π, i |= ϕ_1 and π, i |= ϕ_2;
• π, i |= •ϕ iff i < last(π) and π, i + 1 |= ϕ;
• π, i |= ϕ_1 U ϕ_2 iff there exists j, with i ≤ j ≤ last(π), s.t. π, j |= ϕ_2 and π, k |= ϕ_1 for all k with i ≤ k < j.
In this paper, we make extensive use of □ϕ and ♦ϕ.
We say that π ∈ (2^Prop)^+ satisfies an ltl_f formula ϕ, written π |= ϕ, if π, 0 |= ϕ. For every ltl_f formula ϕ defined over Prop, we can construct a Deterministic Finite Automaton (DFA) F_ϕ that accepts exactly the traces that satisfy ϕ (De Giacomo & Vardi, 2013). More specifically, F_ϕ = (2^Prop, Q, q_0, η, acc), where 2^Prop is the alphabet of the DFA, Q is the finite set of states, q_0 ∈ Q is the initial state, η : Q × 2^Prop → Q is the transition function, and acc ⊆ Q is the set of accepting states.
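Although the paper evaluates full ltl_f via the DFA construction, the two derived operators used most throughout the paper admit a direct check on a finite trace. The following sketch is ours, with illustrative names; a trace is a list of sets of true propositions and a Boolean property is a predicate on such a set.

```python
# A minimal illustrative sketch (ours, not the paper's DFA construction):
# checking the "always" ([]) and "eventually" (<>) patterns on a finite trace.

def always(phi, trace):
    """pi |= []phi : phi holds at every instant of the finite trace."""
    return all(phi(step) for step in trace)

def eventually(phi, trace):
    """pi |= <>phi : phi holds at some instant of the finite trace."""
    return any(phi(step) for step in trace)

# pi = {p} {p,q} {p}
trace = [{"p"}, {"p", "q"}, {"p"}]
assert always(lambda s: "p" in s, trace)      # []p holds on pi
assert eventually(lambda s: "q" in s, trace)  # <>q holds on pi
assert not always(lambda s: "q" in s, trace)  # []q fails on pi
```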

Two-player Games
A (turn-based) two-player game models a game between two players, Player 1 (P1) and Player 2 (P2), formalized as a pair G = (A, W), with A the game arena and W the winning objective. The arena A = (U, V, u_0, α, β) is essentially a bipartite graph, where:
• U is a finite set of P1 nodes;
• V is a finite set of P2 nodes;
• u_0 ∈ U is the initial node;
• α ⊆ U × V is the transition relation for P1;
• β ⊆ V × U is the transition relation for P2.
Intuitively, a token initially in u_0 is moved in turns from nodes in U to nodes in V and vice-versa. P1 moves when the token is in a node u ∈ U, by choosing a destination node v ∈ V for the token, such that (u, v) ∈ α. P2 acts analogously, when the token is in a node v ∈ V, by choosing a node u ∈ U according to β. Thus, P1 and P2 alternate their moves, with P1 playing first, until at some point, after P2 has moved, the game stops. As the token visits the nodes of the arena, it defines a sequence of alternating U and V nodes called a play. If, when the game stops, the play meets W, then P2 wins, otherwise P1 wins. Formally, a play (of A) ρ = ρ_0 ⋯ ρ_n ∈ (U ∪ V)^+ is a finite, nonempty sequence of nodes such that:
• ρ_0 = u_0;
• (ρ_i, ρ_{i+1}) ∈ α for every even i < n, and (ρ_i, ρ_{i+1}) ∈ β for every odd i < n;
• n is even (which implies, by α and β, that ρ_n ∈ U).
Let Plays_A be the set of all plays of A and let last(ρ) = n be the last position (index) of play ρ; by ρ|_V we denote the subsequence of ρ consisting of its V nodes. The winning objective W is a (compact) representation of a set of plays, called winning plays. P2 wins if the game produces a winning play, otherwise P1 wins. A strategy for P2 is a function σ : V^+ → U, which returns a P1 node u ∈ U, given a finite sequence of P2 nodes. A strategy σ is said to be memory-less if, for every two sequences of nodes w = w_0 ⋯ w_n and w′ = w′_0 ⋯ w′_m, whenever w_n = w′_m, it holds that σ(w) = σ(w′); in other words, the move returned by σ is a function of the last node in the sequence. A play ρ is compatible with a P2 strategy σ if ρ_{i+1} = σ(ρ_{≤i}|_V) for every odd i < last(ρ), where ρ_{≤i} is the prefix ρ_0 ⋯ ρ_i. A P2 strategy σ is winning in G = (A, W) if every play ρ compatible with σ is winning.
In this paper we consider two classes of games. The first class is that of reachability games in which, for a set g ⊆ U of P1 nodes, W = Reach(g), where Reach(g) (reachability objective) is the set of plays containing at least one node from g. Formally, Reach(g) = {ρ ∈ Plays_A | there exists k, 0 ≤ k ≤ last(ρ), s.t. ρ_k ∈ g}.
The second class is that of safety games, in which, again for a set g ⊆ U of P1 nodes, W = Safe(g), where Safe(g) (safety objective) is the set of plays where all P1 nodes are from g. Formally, Safe(g) = {ρ ∈ Plays_A | for all even k, 0 ≤ k ≤ last(ρ), ρ_k ∈ g}. Both reachability and safety games can be solved in PTIME in the size of G and, if there is a winning strategy for P2 in G then, and only then, there is a winning memory-less strategy for P2 in G (Martin, 1975).
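The PTIME results rest on the classical backward fixpoint (attractor) computation. The following sketch is our own encoding for a reachability objective, with illustrative names and ignoring the stop-related subtleties of the games used later in the paper; safety games are solved dually.

```python
# Attractor-based solver for Reach(g): compute the set of nodes from which
# P2 can force the token into g. P1 owns U-nodes (and tries to avoid g),
# P2 owns V-nodes. Names and encoding are ours.

def p2_winning_region(U, V, alpha, beta, g):
    """alpha: dict u -> set of V-successors (P1 chooses among them);
    beta : dict v -> set of U-successors (P2 chooses among them)."""
    win = set(g)                      # g-nodes are immediately winning for P2
    changed = True
    while changed:
        changed = False
        for u in U:                   # P1 node: ALL moves must lead into win
            if u not in win and alpha[u] and alpha[u] <= win:
                win.add(u)
                changed = True
        for v in V:                   # P2 node: SOME move must lead into win
            if v not in win and beta[v] & win:
                win.add(v)
                changed = True
    return win

# Tiny arena: from u0, P1 can go to v0 or v1; both let P2 push the token to u1.
U, V, g = {"u0", "u1"}, {"v0", "v1"}, {"u1"}
alpha = {"u0": {"v0", "v1"}, "u1": {"v0"}}
beta = {"v0": {"u1"}, "v1": {"u1"}}
assert "u0" in p2_winning_region(U, V, alpha, beta, g)
# If P2's only move from v1 leads back to u0, P1 escapes via v1.
beta2 = {"v0": {"u1"}, "v1": {"u0"}}
assert "u0" not in p2_winning_region(U, V, alpha, beta2, g)
```

The fixpoint runs at most |U| + |V| rounds, which is where the polynomial bound comes from.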

Mimicking Behaviors in Separated Domains
The problem of mimicking behaviors involves two agents, A and B, each operating in its own domain, D_A and D_B respectively, and requires B to "correctly" mimic in D_B the behavior (i.e., a trace) exhibited by A in D_A. The notion of "correct mimicking" is formalized by a mapping specification, or simply mapping, which is an ltl_f formula specifying when a behavior of A correctly maps into one of B. The agents alternate their moves on their respective domains, with A starting first, until one of the two decides to stop. Only one of A and B, designated as the stop agent, has the power to stop the process, and can do so only after both A and B have moved in the last turn. The mapping constraint is evaluated only when the process has stopped.
The dynamic domains where agents operate are modeled as labelled transition systems.
Definition 1 (Dynamic Domain). A dynamic domain over a finite set Prop is a tuple D = (S, s_0, δ, λ), s.t.:
• S is the finite set of domain states;
• s_0 ∈ S is the initial domain state;
• δ ⊆ S × S is the transition relation;
• λ : S → 2^Prop is the state-labeling function.
With a slight abuse of notation, for every state s ∈ S, we define the set of possible successors of s as δ(s) = {s′ | (s, s′) ∈ δ}. D is deterministic in the sense that, given s, the agent operating in D can select the transition leading to the next state s′ from those available in δ(s). Without loss of generality, we assume that D is serial, i.e., δ(s) ≠ ∅ for every state s ∈ S. A finite trace of D is a sequence of states τ = s_0 ⋯ s_n s.t. s_{i+1} ∈ δ(s_i), for i = 0, . . ., n − 1. Infinite traces are defined analogously, except that i = 0, . . ., ∞. By |τ| we denote the length of τ, i.e., the (possibly infinite) number of states it contains. In the following, we simply use the term trace for a finite trace, and explicitly specify when it is infinite.
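Definition 1 has a direct rendering in code. The sketch below uses our own names; it encodes a dynamic domain as a serial labelled transition system, with a helper that checks whether a sequence of states is a finite trace starting in the initial state.

```python
# A direct Python rendering of Definition 1 (names are ours): a dynamic
# domain as a serial labelled transition system.

class DynamicDomain:
    def __init__(self, states, init, delta, label):
        self.states = states      # S
        self.init = init          # s0
        self.delta = delta        # dict: s -> set of successors delta(s)
        self.label = label        # dict: s -> set of true propositions
        assert all(delta[s] for s in states), "domain must be serial"

    def is_trace(self, tau):
        """tau = s0 ... sn is a trace iff it starts in s0 and follows delta."""
        return (len(tau) > 0 and tau[0] == self.init
                and all(s2 in self.delta[s1] for s1, s2 in zip(tau, tau[1:])))

D = DynamicDomain({"a", "b"}, "a",
                  {"a": {"b"}, "b": {"a", "b"}},
                  {"a": {"p"}, "b": set()})
assert D.is_trace(["a", "b", "b", "a"])
assert not D.is_trace(["a", "a"])       # no self-loop on a
```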
We next model the problem of mimicking behaviors by two dynamic systems over disjoint sets of propositions, together with an ltl f formula specifying the mapping, and the designation of the stop agent.
Definition 2. An instance of the Mimicking Behaviors in Separated Domains (MBSD) problem is a tuple P = (D_A, D_B, Φ, Ag_stop), where:
• D_A = (S, s_0, δ_A, λ_A) is a dynamic domain over Prop_A;
• D_B = (T, t_0, δ_B, λ_B) is a dynamic domain over Prop_B, with Prop_A ∩ Prop_B = ∅;
• Φ is the mapping specification, i.e., an ltl_f formula over Prop_A ∪ Prop_B;
• Ag_stop ∈ {A, B} is the designated stop agent.
Intuitively, a solution to the problem is a strategy for agent B that allows B to step-by-step map the observed behavior of agent A into one of its own behaviors, in such a way that the mapping specification is satisfied, according to the formalization provided next.
Formally, a strategy for agent B is a function σ : S^+ → T which returns a state of D_B, given a sequence of states of D_A. Observe that this notion is fully general and is defined on all of D_A's state sequences, even non-traces. Among such strategies, we want to characterize those that allow B to satisfy the mapping specification by executing actions only on D_B.
We say that a strategy σ is executable in P if, for every trace τ_A = s_0 ⋯ s_n of D_A, the sequence τ_B = σ(s_0) σ(s_0 s_1) ⋯ σ(s_0 ⋯ s_n) is a trace of D_B (in particular, σ(s_0) = t_0). When σ is executable, the trace τ_B as above is called the trace induced by σ on τ_A, and denoted as σ(τ_A).

For two traces τ_A = s_0 ⋯ s_n and τ_B = t_0 ⋯ t_n of D_A and D_B, respectively, we define their joint trace label, denoted λ(τ_A, τ_B), as the word over 2^{Prop_A ∪ Prop_B} whose i-th element is λ_A(s_i) ∪ λ_B(t_i), for i = 0, . . ., n. In words, λ(τ_A, τ_B) is the word obtained by joining the labels of the states of τ_A and τ_B at the same positions.
We can now characterize solution strategies.
Definition 3. A strategy σ is a solution to an MBSD problem instance P = (D_A, D_B, Φ, Ag_stop), if σ is executable in P and either:
1. Ag_stop = A and, for every trace τ_A of D_A, λ(τ_A, σ(τ_A)) |= Φ; or
2. Ag_stop = B and, for every infinite trace τ_A^∞ of D_A, there exists a finite prefix τ_A of τ_A^∞ such that λ(τ_A, σ(τ_A)) |= Φ.
The definition requires that the strategy σ be executable in P, i.e., that σ returns an executable move for B whenever A performs an executable move. Then, two cases are identified, which correspond to the possible designations of the stop agent. In case 1, the stop agent is A. In this case, since A can stop at any time point (unknown in advance by B), B must be able to continuously (i.e., step-by-step) mimic A's behavior, otherwise A could stop at a point where B fails to mimic. Case 2 is slightly different, as B can choose when to stop. In this case, σ must prescribe a sequence of moves, in response to A's, such that Φ is eventually (as opposed to continuously) satisfied, at which point B can stop the execution. Seen differently, σ must prevent A from moving indefinitely, over an infinite horizon (without B ever being able to mimic A).

Mimicking Behaviors with Point-wise Mapping Specifications
In this section, we explore mimicking specifications that are of a point-wise nature. This setting requires that B, while mimicking A, constantly satisfies certain conditions, which can be regarded as invariants. Such a requirement is formally captured by specifications of the following form, where the ϕ_i are Boolean formulas over D_A and the ψ_i over D_B, respectively:

Φ = ∧_{i=1}^{k} □(ϕ_i → ψ_i).

We first provide an illustrative example that demonstrates the use of point-wise mappings, then explore algorithmic and complexity results.

Point-wise Mapping Specifications in the Pac-Man World
In the popular game Pac-Man, the eponymous character moves in a maze to eat all the candies. Four erratic ghosts, Blinky, Pinky, Inky and Clyde, wander around, threatening Pac-Man, who cannot touch them without losing (we neglect the special candies with which Pac-Man can fight the ghosts). The ghosts cannot eat the candies. In the real game, the maze is continuous but, for simplicity, we consider a grid model where cells are identified by two coordinates. Also, we imagine a variant of the game where the ghosts can walk through walls. Pac-Man wins the stage when it has eaten all the candies. The ghosts end the game when this happens.
We model this scenario as an MBSD problem Q = (G, P, Φ, A), with domains P(ac-Man, agent B) and G(hosts, agent A). In P, states model Pac-Man's and the candies' positions, while transitions model Pac-Man's move actions. Pac-Man cannot walk through walls. A candy disappears when Pac-Man moves onto it. Similarly, states of G model (all) the ghosts' positions, and transitions model the ghosts' movements through cells. Each transition corresponds to a move of all ghosts at once. G does not model candies or walls, as they do not affect, nor are affected by, the ghosts.
Assuming an N × N grid with some cells occupied by walls, domain P = (S, s_0, δ_p, λ_p) is as follows, where C is the set of cells (x, y) not containing a wall:
• for every (x, y) ∈ C, introduce the Boolean propositions p_{x,y} (Pac-Man at (x, y)) and c_{x,y} (candy at (x, y)), and let Prop_p be the set of all such propositions;
• S ⊆ 2^{Prop_p} is the set of all interpretations over Prop_p (represented as subsets of Prop_p) such that: every s ∈ S contains exactly one proposition p_{x,y} (Pac-Man occupies exactly one cell); no cell occupied by Pac-Man or by a wall contains a candy;
• δ_p is such that (s, s′) ∈ δ_p iff, for all (x, y) ∈ C: if p_{x,y} ∈ s then p_{x′,y′} ∈ s′, with (x′, y′) ∈ {(x, y), (x, y + 1), (x, y − 1), (x + 1, y), (x − 1, y)} (Pac-Man moves by at most one cell, either horizontally or vertically); if c_{x,y} ∈ s and p_{x,y} ∉ s′ then c_{x,y} ∈ s′ (all candies available in s remain so if not eaten by Pac-Man);
• λ_p(s) = s.
Domain G = (T, t_0, δ_g, λ_g) is defined in a similar way (we omit the formal details): we use propositions bk_{x,y}, pk_{x,y}, ik_{x,y}, cd_{x,y} for Blinky, Pinky, Inky and Clyde's positions, respectively; T is the set of interpretations where each ghost occupies exactly one cell (possibly containing a wall; several ghosts may be in the same cell); the ghosts start at (N/2, N/2) (t_0); δ_g models a 1-cell horizontal or vertical move for all ghosts at once; λ_g is the identity.
Pac-Man's primary goal (besides eating all candies) is to stay alive, which we formalize with point-wise mappings requiring that Pac-Man is never in the same cell as a ghost, i.e., one mapping □((bk_{x,y} ∨ pk_{x,y} ∨ ik_{x,y} ∨ cd_{x,y}) → ¬p_{x,y}) for every cell (x, y) ∈ C. Any strategy σ that is a solution to Q = (G, P, Φ, A) keeps Pac-Man alive. To enforce Φ, Pac-Man needs a strategy that prevents ending up in a cell where a ghost is. Notice that, to compute σ, one cannot proceed greedily by considering only one step at a time, but must plan over all future evolutions, to guarantee that Pac-Man does not eventually get trapped. With such a σ, no matter when the ghosts end the game, Pac-Man will never lose (and, in fact, it will win if the ghosts stop when all candies in the maze have been eaten).

Solving MBSD with Point-wise Mapping Specifications
We show how to solve an MBSD instance P by reduction to the problem of finding a winning strategy in a two-player game, for which algorithms are well known (Martin, 1975). Specifically, we construct a two-player game G_P = (A, W) that has a winning strategy iff P has a solution.
Given an MBSD instance P = (D_A, D_B, Φ, Ag_stop), with D_A = (S, s_0, δ_A, λ_A) and D_B = (T, t_0, δ_B, λ_B), we construct the game arena A = (U, V, u_0, α, β), where:
• U = S × T is the set of P1 nodes;
• V = S × T is the set of P2 nodes;
• u_0 = (s_0, t_0) is the initial node;
• ((s, t), (s′, t)) ∈ α iff s′ ∈ δ_A(s);
• ((s′, t), (s′, t′)) ∈ β iff t′ ∈ δ_B(t).
Intuitively, the nodes of A represent joint state configurations of both D_A and D_B (initially in their respective initial states), while the transition relations account for the moves A (modeled by P1) and B (modeled by P2) can perform, imposing, at the same time, their strict alternation.
As for the winning objective W, the key idea is that, since in point-wise mappings the temporal operator □ (always) distributes over conjunction, and since Ag_stop = A, the conjuncts of the mapping are in fact propositional formulas to be guaranteed all along the agents' behaviors, captured by plays of A. This can be easily expressed as a safety objective on A, as shown below.
Let Φ = ∧_{i=1}^{k} □(ϕ_i → ψ_i) be the (point-wise) mapping specification. We have that Φ ≡ □Φ′, where Φ′ = ∧_{i=1}^{k} (ϕ_i → ψ_i) is a Boolean formula in which every ϕ_i is over Prop_A only and every ψ_i over Prop_B only. Therefore, in order to solve P, we need to find a strategy σ such that, for every trace τ_A of D_A, every position of λ(τ_A, σ(τ_A)) satisfies Φ′. This is captured by the safety objective W = Safe(g), with g = {(s, t) ∈ U | (λ_A(s), λ_B(t)) |= Φ′}. As a consequence of the above construction, we obtain the following result.
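The product-arena construction can be sketched in a few lines of code. The encoding below is ours (domains as tuples, mappings as pairs of predicates on labels, nodes tagged with whose turn it is); it builds the arena and the safe set g for a point-wise specification.

```python
# A sketch of the Section 4.2 construction (our encoding): U- and V-nodes
# pair a D_A state with a D_B state, and g keeps the joint states whose
# combined label satisfies Phi' = AND_i (phi_i -> psi_i).

def build_safety_game(DA, DB, mappings):
    """DA, DB: (states, init, delta, label) as in Definition 1;
    mappings: list of (phi_i, psi_i), each a predicate on a label set."""
    SA, a0, dA, lA = DA
    SB, b0, dB, lB = DB
    U = {("U", s, t) for s in SA for t in SB}    # A (P1) to move
    V = {("V", s, t) for s in SA for t in SB}    # B (P2) to move
    alpha = {("U", s, t): {("V", s2, t) for s2 in dA[s]}
             for s in SA for t in SB}            # A moves on D_A only
    beta = {("V", s, t): {("U", s, t2) for t2 in dB[t]}
            for s in SA for t in SB}             # B replies on D_B only
    g = {("U", s, t) for s in SA for t in SB
         if all((not phi(lA[s])) or psi(lB[t]) for phi, psi in mappings)}
    return U, V, ("U", a0, b0), alpha, beta, g

DA = ({"a0", "a1"}, "a0", {"a0": {"a1"}, "a1": {"a0"}},
      {"a0": {"p"}, "a1": set()})
DB = ({"b0", "b1"}, "b0", {"b0": {"b1"}, "b1": {"b0"}},
      {"b0": set(), "b1": {"q"}})
# single point-wise mapping: [](p -> q)
U, V, u0, alpha, beta, g = build_safety_game(
    DA, DB, [(lambda l: "p" in l, lambda l: "q" in l)])
assert ("U", "a0", "b1") in g      # p holds, q holds
assert ("U", "a0", "b0") not in g  # p holds, q does not
assert ("U", "a1", "b0") in g      # p does not hold: conjunct is vacuous
```

Feeding the resulting arena and g to a standard safety-game solver then yields the winning strategy, as stated in Lemma 1.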
Lemma 1. There is a solution to P if and only if there is a solution to the safety game G_P.
Proof. As an intuition, notice that, once computed, a winning strategy for G_P is essentially a solution to P. This, indeed, can be obtained by projecting away the V component of all the nodes in a play ρ, thus transforming ρ into a trace of D_A. We now show the proof in detail. We first show that if there is a solution to P then there is a solution to G_P. For that, we first show that if σ is an executable strategy for P then σ can be reduced to a strategy σ′ for G_P. To this end, consider a play ρ = (s_0, t_0)(s_1, t_0) ⋯ (s_n, t_n), a P1 move leading to (s_{n+1}, t_n), and let τ = s_0 ⋯ s_n s_{n+1}. By the definition of G_P, τ is a trace of D_A. Therefore, since σ is executable, σ is defined on τ. Thus, for ρ′ = ρ • (s_{n+1}, t_n), where • denotes concatenation, we can define σ′(ρ′) = (s_{n+1}, σ(τ)). Note that this is a proper definition since the trace induced by σ on τ is a trace in D_B, hence (t_n, σ(τ)) ∈ δ_B. Thus σ′ is a proper strategy for G_P.
Next, we need the following claim that describes the correspondence between σ and σ ′ .
Claim 1. A sequence ρ = (s_0, t_0)(s_1, t_0) ⋯ (s_n, t_{n−1})(s_n, t_n) is a play of G_P compatible with σ′ iff τ_A = s_0 ⋯ s_n is a trace of D_A and t_0 ⋯ t_n = σ(τ_A).
For a proof of Claim 1, given a trace τ_A = s_0 ⋯ s_n of D_A with induced trace σ(τ_A) = t_0 ⋯ t_n, by the definition of G_P and that of σ′ provided above, it follows that the sequence ρ = (s_0, t_0)(s_1, t_0) ⋯ (s_n, t_{n−1})(s_n, t_n) is a play of G_P compatible with σ′; conversely, for a play ρ compatible with σ′, again by the definition of G_P and σ′, we have that the sequences τ_A = s_0 ⋯ s_n and τ_B = t_0 ⋯ t_n are traces of D_A and D_B, respectively, with τ_B = σ(τ_A).
Back to proving Lemma 1, since σ is a solution, every trace τ_A of D_A is such that λ(τ_A, σ(τ_A)) |= Φ, hence every position of λ(τ_A, σ(τ_A)) satisfies Φ′. By Claim 1, every play compatible with σ′ corresponds to such a pair of traces, hence it ends in a g node. Since ρ is arbitrary, every play in G_P compatible with σ′ ends in a g node, hence σ′ is a winning strategy for P2 in Safe(g). That completes the first direction of the lemma.
For the other direction, assume that σ′ is a strategy for G_P. Define a strategy σ′′ for P as follows. Define first σ′′(s_0) = t_0. Then, for a trace τ_A = s_0 ⋯ s_n s_{n+1} of D_A, let ρ = (s_0, t_0)(s_1, t_0) ⋯ (s_n, t_n) be the play corresponding to s_0 ⋯ s_n, and define σ′′(τ_A) = t_{n+1}, where (s_{n+1}, t_{n+1}) = σ′(ρ • (s_{n+1}, t_n)). Since σ′ returns nodes reached through β, we have t_{n+1} ∈ δ_B(t_n); thus σ′′ is an executable strategy in P.
To describe the correspondence between σ ′ and σ ′′ we make the next claim, completely analogous to Claim 1.
For a proof, given a trace τ_A = s_0 ⋯ s_n of D_A with induced trace σ′′(τ_A) = t_0 ⋯ t_n, by the definition of σ′′ provided above, it follows that the sequence ρ = (s_0, t_0)(s_1, t_0) ⋯ (s_n, t_{n−1})(s_n, t_n) is a play of G_P compatible with σ′. On the other hand, for a play ρ = (s_0, t_0) ⋯ (s_n, t_n) compatible with σ′, again by the definition of σ′′, we have that the sequences τ_A = s_0 ⋯ s_n and τ_B = t_0 ⋯ t_n are traces of, respectively, D_A and D_B, such that τ_B = σ′′(τ_A). Now, to conclude Lemma 1, assume that σ′ is a winning strategy for P2 in G_P, with winning objective W = Safe(g). For a trace τ_A of D_A, the corresponding sequence ρ = (s_0, t_0)(s_1, t_0) ⋯ (s_n, t_n) is a play of G_P compatible with σ′. Moreover, since σ′ is winning, ρ_i ∈ g for every even i, 0 ≤ i ≤ 2n. But then, for all pairs (s, t) in ρ, we have that (λ_A(s), λ_B(t)) |= Φ′, that is, λ(τ_A, τ_B) satisfies Φ. Since τ_A is arbitrary, it follows that σ′′ is a solution for P, which completes the proof.
Finally, the construction of the safety game G P together with Lemma 1 gives us the following result.
Theorem 1. Solving MBSD for point-wise mapping specifications is in PTIME for combined complexity, mapping complexity and domain complexity.
Proof. Given an MBSD instance P, we construct the safety game G_P as shown. Observe that the construction of G_P requires constructing the game arena A, which can be done in time polynomial in |D_A| + |D_B|, and setting the set of states g, which takes time at most O(|Φ′|) for each state in A. Finally, by Lemma 1, we have that P has a solution if and only if G_P has a solution, where solving a safety game takes linear time in the size of G_P (Martin, 1975).
Observe that if D A and D B are represented compactly (logarithmically) using, e.g., logical formulas or PDDL specifications (Haslum, Lipovetzky, Magazzeni, & Muise, 2019), then the domain (and hence the combined) complexity becomes EXPTIME, and mapping complexity remains PTIME.Similar considerations hold also for the other cases that we analyze throughout the paper.

Mimicking Behaviors with Target Mapping Specifications
We now explore mimicking specifications that are of a target nature. In this setting, B has to mimic A in such a way that whenever A reaches a certain target, so does B, although not necessarily at the same time step: B is free to reach the required target at the same time, later, or even before A does. For this to be possible, B must have the power to stop the game, which is what we assume here. Formally, target mapping specifications are formulas of the following form, where the ϕ_i and ψ_i are Boolean properties over D_A and D_B, respectively:

Φ = ∧_{i=1}^{k} (♦ϕ_i → ♦ψ_i).

As before, we first give an illustrative example that demonstrates the use of target mappings, then we explore algorithmic and complexity results.

Target Mapping Specifications in Rubik's Cube
Two agents, teacher H and learner L, are provided with two Rubik's cubes of different sizes: H's cube has an edge of size 4 whereas L's has an edge of size 3. L wants to learn from H the main steps to solve the cube; to this end, H shows L how to reach certain milestone configurations on the cube of size 4 and asks L to replicate them on the cube of size 3, even in a different order. Milestones are simply combinations of solved faces, e.g., red and green, white and blue and yellow, or simply white. Obviously, L cannot blindly replicate H's moves, as the cubes are of different sizes and the actual sequences to solve the faces are different; thus, L must find its own way to reach the same milestones as H, possibly in a different order. When L is tired, it can stop the learning process.
We model this scenario as an MBSD problem instance R = (H, L, Φ, B), where H and L model, respectively, H's and L's dynamic domain, i.e., the two cubes. The two domains are conceptually analogous but, modeling cubes of different sizes, they feature different sets of states and transitions, which correspond to cube configurations and possible moves, respectively. We model such domains parametrically wrt the size E of the edge.
Fix the cube in some position, name the faces U(p), D(own), L(eft), R(ight), F(ront), B(ack), let Fac = {U, D, L, R, F, B}, and associate a pair of integer coordinates to each position in a face, so that every position is identified by a triple (f, x, y) ∈ Pos = Fac × {0, . . ., E − 1}^2. To model the color assigned to tile (f, x, y), we use propositions of the form c_{f,x,y}, with c ∈ Col = {white, green, red, yellow, blue, orange}. Let Prop be the set of all such propositions. Finally, index the horizontal and vertical "slices" of the cube from 0 to E − 1.
The (parametric) dynamic domain for a Rubik's cube with edge of size E is the domain D(E) = (S, s_0, δ, λ), where:
• S ⊆ 2^Prop is the set of all admissible (i.e., reachable) cube configurations; among other constraints, omitted for brevity, this requires that, for every s ∈ S and every (f, x, y) ∈ Pos, there exists exactly one c ∈ Col such that c_{f,x,y} ∈ s (every position has exactly one color);
• s_0 is an arbitrary state from S;
• δ allows a transition from s to s′ iff s′ models a configuration reachable from s by a 90° (clockwise or counter-clockwise) rotation of one of its 2E slices;
• λ(s) = s.
We then define H = D(4) and L = D(3). To distinguish the elements of H from those of L, we use a primed version in the latter, e.g., Pos′ for positions, c′_{f,x,y} for propositions, and so on.
As said, L's goal is to replicate the milestones shown by H. For every face f ∈ Fac and color c ∈ Col, we define the formula c_f = ∧_{(x,y)} c_{f,x,y} to express that the tiles of face f all have color c. For L, we correspondingly have c′_f = ∧_{(x,y)} c′_{f,x,y}. An example of target mappings is then (the concrete milestones chosen here are for illustration):

(♦(blue_R ∧ red_U) → ♦(blue′_R ∧ red′_U)) ∧ (♦white_L → ♦white′_L) ∧ (♦yellow_D → ♦(yellow′_D ∧ ¬white′_L)).

Observe that L has many ways to fulfill H's requests: for instance, by reaching a configuration where blue′_R ∧ red′_U ∧ white′_L holds, it has fulfilled the first and the second request, even if the configuration was reached before H showed the milestones. Obviously, however, the last request cannot be fulfilled at the same time as the second one, as white′_L clearly excludes ¬white′_L, thus an additional effort by L is required to satisfy the specification.

Solving MBSD with Target Mapping Specifications
For target mappings as well, we reduce MBSD to strategy synthesis for a two-player game.
To this end, assume an MBSD instance P = (D_A, D_B, Φ, B) with mapping specification Φ = ∧_{i=1}^{k} (♦ϕ_i → ♦ψ_i). To solve P, we must find a strategy σ such that, for every infinite trace τ_A^∞ = s_0 s_1 ⋯ of D_A and every conjunct ♦ϕ_i → ♦ψ_i of Φ, if there exists an index j_i such that λ_A(s_{j_i}) |= ϕ_i, then there exist a finite prefix τ_A = s_0 ⋯ s_n of τ_A^∞ and an index l_i ≤ n such that, for σ(τ_A) = t_0 ⋯ t_n, we have that λ_B(t_{l_i}) |= ψ_i (recall that ϕ_i and ψ_i are Boolean formulas over Prop_A only and Prop_B only, respectively). As per Definition 3, this is equivalent to requiring that λ(τ_A, σ(τ_A)) |= Φ. The challenge in constructing σ is that the index l_i may be equal to, smaller or larger than j_i. Thus σ needs to record which ϕ_i or ψ_i were already met during the trace, up to the current point. Since the number of possible traces to the current state may be exponential, keeping count of all possible options may be expensive. We first discuss general domain structures, then in Section 5.2.2 we explore a very specific tree-like structure.
For general domains, there may exist many traces ending in a given state, and each such trace contains states that satisfy, in general, different sub-formulas ϕ_i and ψ_i occurring in the mappings. Thus satisfaction of sub-formulas cannot be associated to states as done before, but must be associated to traces. In fact, to check whether a target mapping is satisfied, it is enough to remember, for every i = 1, . . ., k, whether A has satisfied ϕ_i and/or B has satisfied ψ_i along a trace. This observation suggests to introduce a form of memory to record satisfaction of sub-formulas along traces. We do so by augmenting the game arena constructed in Section 4. In particular, we extend each node in the arena with an array of bits of size 2k to keep track of which sub-formulas ϕ_i and ψ_i were satisfied, along the play that led to the node, by some of the domain states contained in the nodes of the play.
Formally, let M = ({0, 1}^2)^k and let [cd] = ((c_1, d_1), ..., (c_k, d_k)) denote the generic element of M. Given an MBSD instance P = (D_A, D_B, Φ, B), where D_A = (S, s_0, δ_A, λ_A) and D_B = (T, t_0, δ_B, λ_B), we define the game arena A = (U, V, u_0, α, β) by extending the construction of Section 4 with the memory component [cd]. We then define the game structure G_P = (A, W), where W = Reach(g), with g = {u ∈ U | u = (s, t, [cd]), where [cd] is such that c_i = 0 or d_i = 1, for every i = 1, ..., k}. Intuitively, g is the set of all nodes reached by a play such that if ϕ_i is satisfied in the play (by a state of D_A in some node of the play), then so is ψ_i (by a state of D_B in some node of the play), for every i = 1, ..., k. Thus, if a play contains a node from g, then the corresponding traces of D_A and D_B, combined, satisfy all of the mapping's conjuncts.
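Reachability games such as G_P can be solved in time linear in the number of edges by a standard backward attractor computation. The following Python sketch shows the textbook algorithm (graph encoding and names are illustrative, not from the paper):

```python
# Attractor computation for a reachability game Reach(goal), where player 2
# (agent B) tries to reach `goal`. owner[u] in {1, 2} gives the node's owner.

def solve_reachability(nodes, edges, owner, goal):
    """Return the set of nodes from which player 2 can force reaching `goal`."""
    succ = {u: [] for u in nodes}
    pred = {u: [] for u in nodes}
    for u, v in edges:
        succ[u].append(v)
        pred[v].append(u)
    win = set(goal)
    count = {u: len(succ[u]) for u in nodes}  # remaining escapes for player-1 nodes
    frontier = list(goal)
    while frontier:
        v = frontier.pop()
        for u in pred[v]:
            if u in win:
                continue
            if owner[u] == 2:          # player 2 picks one winning successor
                win.add(u); frontier.append(u)
            else:                      # player 1 must have no escaping successor
                count[u] -= 1
                if count[u] == 0:
                    win.add(u); frontier.append(u)
    return win
```

Each edge is processed at most once, which gives the linear-time bound used in the complexity analysis below.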
As a consequence of this construction, we obtain the following result, whose full proof follows the same lines as that of Lemma 1.
Lemma 2. There is a solution to P if and only if there is a winning strategy for the reachability game G P .
Then, Lemma 2 gives us Theorem 2. An immediate consequence of Theorem 2 is that, for mappings of fixed size, the domain complexity of the problem is in PTIME. For combined complexity, note that the memory-keeping approach adopted in G_P is of a monotonic nature, i.e., once set, the bits corresponding to the satisfaction of ϕ_i and ψ_i cannot be unset. We use this insight to tighten our result and show that the presented construction can in fact be carried out in PSPACE.
Theorem 3. MBSD for target mapping specifications is in PSPACE for combined complexity and mapping complexity, and in PTIME for domain complexity.
Proof. Having shown PTIME membership for domain complexity in Theorem 2, it remains to show membership in PSPACE for combined complexity. Assume that P2 wins the game G_P and let σ_P be a memoryless winning strategy for P2. First, observe that every play ρ compatible with σ_P is finite. Therefore, since σ_P is memoryless, no play ρ compatible with σ_P contains two identical V nodes. This means that, w.l.o.g., in every play the [cd] component of the game nodes changes after at most 2 × |D_A × D_B| steps (since G_P contains two copies of the domains product, one for P1 and one for P2). Next, we use the monotonicity property of G_P: between every two consecutive game nodes in ρ, each index in [cd] can only remain as is or change from 0 to 1; therefore, the [cd] component changes at most 2k times throughout the play.
Thus, we reduce G_P to an identical game G'_P that terminates either when reaching an accepting state (in which case P2 wins), or after 2 × |D_A × D_B| × 2k moves (in which case P1 wins). Standard Min-Max algorithms (e.g., (Russell & Norvig, 2020)), which work in space polynomial in the maximal strategy depth, can be deployed to verify a winning strategy for P2 in G'_P. On the one hand, if there is a winning strategy for G'_P, then there is a winning strategy for G_P (the same strategy). On the other hand, if there is a winning strategy for G_P, then there is a memoryless winning strategy for G_P that terminates after at most 2 × |D_A × D_B| × 2k moves, which means that there is a winning strategy for P2 in G'_P.
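The depth-bounded Min-Max check used in this proof can be sketched as follows: a recursive exploration whose only memory is the call stack, so that space usage is polynomial in the depth bound (all names and encodings are illustrative, not from the paper):

```python
# Depth-bounded Min-Max for G'_P: P2 wins a branch by reaching an accepting
# node within the move budget; exhausting the budget means P1 wins the branch.

def player2_wins(node, succ, owner, accepting, depth):
    if accepting(node):
        return True                      # P2 reached a goal node
    if depth == 0:
        return False                     # budget exhausted: P1 wins this branch
    if owner(node) == 2:                 # P2 needs one good move
        return any(player2_wins(v, succ, owner, accepting, depth - 1)
                   for v in succ(node))
    return all(player2_wins(v, succ, owner, accepting, depth - 1)  # P1 moves
               for v in succ(node))
```

Called with depth = 2 × |D_A × D_B| × 2k, the recursion depth, and hence the space, is polynomial in the input, matching the PSPACE upper bound.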
We continue our analysis of MBSD with target mapping specifications by exploring whether memory-keeping is avoidable and a more efficient solution approach can be found. As the following result implies, this is most likely not the case.
Theorem 4. MBSD for target mapping specifications is PSPACE-hard in combined complexity (even for D A , D B as simple DAGs).
Proof Outline.We give a proof sketch, see Section 5.2.1 below for the detailed proof.
A QBF-CNF-1 formula is a QBF formula in CNF in which every clause contains at most one universal variable. The language TQBF-CNF-1, of all true QBF-CNF-1 formulas, is also PSPACE-complete; see Proposition 1 below for completeness. We show a polynomial-time reduction from TQBF-CNF-1 to MBSD.
Given a QBF-CNF-1 formula F, assume w.l.o.g. that each quantifier alternation involves exactly one variable. Construct the following MBSD instance P_F. Intuitively, the domains D_A and D_B are directed acyclic graphs (DAGs), where D_A controls the universal variables and D_B the existential ones; see Figure 1 for a rough sketch of the domain graphs for a QBF formula with universal variables x^A_1, x^A_2 and existential variables x^B_1, x^B_2. The initial states are s^A_1 for agent A and s^B_1 for agent B. By traversing the domains in alternation, each agent can choose, at every junction node (depicted as s^A_i in D_A or s^B_i in D_B), between a true path through the ⊤-depicted nodes and a false path through the ⊥-depicted nodes, thus corresponding to assigning the propositions that are the analogues of the universal (agent A) or existential (agent B) variables. For example, by visiting s^A_1⊤, agent A satisfies a proposition p^A_1⊤ that corresponds to assigning the universal variable x^A_1 = true. The mapping Φ is set according to F: each clause corresponds to a specific conjunct, built from propositions in Prop_A and Prop_B for its universal and existential literals, respectively. An additional conjunct is added to ensure that agent B does not stop ahead of time. Then, a strategy for agent B determining which path to choose at every junction node corresponds to a strategy for assigning the existential variables of F. As such, F is true if and only if there is a solution to the MBSD instance P_F.

Detailed proof of Theorem 4
We first provide a detailed proof of Theorem 4. Then, for completeness, we prove that the language TQBF-CNF-1, used in the proof, is PSPACE-complete.
Given a QBF-CNF-1 formula F with n universal variables, assume w.l.o.g. that each quantifier alternation involves exactly one variable. Construct the following MBSD instance P_F. Intuitively, for H ∈ {A, B}, the separate domains D_H are DAGs, each composed of n + 1 major states s^H_1, ..., s^H_{n+1}, connected by true and false paths as detailed alongside Figure 1.

Figure 1: A rough sketch of the domains in the reduction construction in Theorem 4. The initial state for agent A is s^A_1 and for agent B is s^B_1.

Back to the proof: obviously, the construction of P_F takes time polynomial in |F|. Note that, while the agents move in D_A and D_B, the only choices an agent has are at the states s^H_i, where it decides whether to move through the true path or the false path. Also note that both agents always progress at the same pace, that is, agent A is in s^A_i iff agent B is in s^B_i. Moreover, in every path the agents take in their respective domains, exactly one of s^H_i⊤ or s^H_i⊥ is visited; thus, in every trace formed, either p^H_i⊤ or p^H_i⊥ is satisfied, but not both. This means that ♦(p^H_i⊤) ↔ ¬♦(p^H_i⊥) is always true. Now assume that F is true. Then there is a strategy σ_F for the existential player that makes F true. We construct the following strategy σ_P for agent B: whenever agent A is at s^A_i and takes the true path, thus satisfying p^A_i⊤ (resp., the false path, satisfying p^A_i⊥), feed the corresponding assignment of x^A_i to σ_F; if σ_F then assigns x^B_i = true, set agent B to take the true path, thus satisfying p^B_i⊤, and otherwise the false path, satisfying p^B_i⊥. Due to the mirroring between Φ and F, it follows that when both agents reach s^H_{n+1} (and therefore the stopping constraint is satisfied), every clause C of F is true, and thus so is its corresponding conjunct ν_C (recall that the subformula ♦(p^A_*) is always true). Next, assume that there is a winning strategy σ_P for P_F.
Then, similarly, we construct a strategy σ_F as follows: at every state s^B_i, whenever agent B takes the true path (resp., false path), set x^B_i = true (resp., x^B_i = false). Following σ_P ensures that all the conjuncts of Φ are true. Note that, since the stopping constraint is satisfied, agent B reaches s^B_{n+1}, which guarantees that σ_F is well defined for all variables. In addition, every clause C corresponding to a conjunct ν_C must also be true. This completes the proof.
The PSPACE-hardness of TQBF-CNF-1 is not a hard exercise; for completeness, we provide a full proof.
Proof. TQBF-CNF is known to be PSPACE-complete (Garey & Johnson, 1979). Obviously, TQBF-CNF-1 is in PSPACE; we show PSPACE-hardness. Given a QBF-CNF formula F, we transform F into a QBF-CNF-1 formula F' such that F is true if and only if F' is true. We construct F' from F as follows. First, we add a fresh existential variable z_i for every universal variable x_i. In addition, we conjoin F with the clauses (x_i ∨ ¬z_i) and (¬x_i ∨ z_i), whose conjunction is logically equivalent to (x_i ↔ z_i). Finally, in every original clause C of F, we replace every literal x_i with z_i and every literal ¬x_i with ¬z_i. For the alternation order, we place each z_i anywhere after x_i (we can add dummy universal variables to keep the alternation interleaving order, as is standard in such reductions). Since every original clause of F now contains only existential variables, F' is indeed in the QBF-CNF-1 form described above. Moreover, note that in F' every clause that contains a universal literal has size 2.
Obviously, constructing F' from F takes time polynomial in |F|. Assume that F is true. Then there is a strategy σ_F for choosing the existential variables such that F is true. Define a strategy σ_{F'} that copies σ_F and, for every choice of z_i, echoes the assignment of x_i, that is, sets z_i = true iff x_i was set to true. Since every x_i precedes z_i, this can be done. Such a strategy makes F' true. Next, assume F' is true. Then there is a strategy σ_{F'} for choosing the existential variables such that F' is true. Define a strategy σ_F that simply repeats σ_{F'} while ignoring the assignments of the z variables (this can be done since every assignment of z_i in σ_{F'} has to agree with the assignment of x_i). Again, it follows that such a strategy makes F true. Thus, TQBF-CNF-1 is PSPACE-complete as well.
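The clause transformation in this proof can be sketched in Python over a DIMACS-style encoding (integer literals, negative for negation); the encoding and names are illustrative, not from the paper:

```python
# Transform a QBF-CNF matrix into CNF-1 form: every universal literal in an
# original clause is replaced by a fresh existential copy z_i, which is tied
# to x_i by the binder clauses (x_i v -z_i) and (-x_i v z_i), i.e. x_i <-> z_i.

def to_cnf1(clauses, universal):
    """clauses: list of lists of ints; universal: set of universal var indices."""
    fresh = max(abs(l) for c in clauses for l in c) + 1
    z = {}                               # x_i -> its existential copy z_i
    for x in sorted(universal):
        z[x] = fresh
        fresh += 1
    out = []
    for x, zx in z.items():              # binder clauses, each of size 2
        out.append([x, -zx])
        out.append([-x, zx])
    for c in clauses:                    # rewrite original clauses
        new_c = []
        for l in c:
            var, sign = abs(l), (1 if l > 0 else -1)
            new_c.append(sign * z.get(var, var))
        out.append(new_c)
    return out, z
```

After the rewrite, every original clause is purely existential, and the only clauses containing a universal literal are the size-2 binder clauses, as the proof observes.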

MBSD for Tree-like Domains
We conclude this section by discussing a very specific tree-like domain structure. We say that a dynamic domain D = (S, s_0, δ, λ) is tree-like if the transition relation δ induces a tree structure on the states, except for some states which may admit self-loops as their only outgoing transition (such states would be leaves, were the self-loops not present). For this class of domains, the exponential blowup in the number of traces does not occur, since for every state s there exists a unique trace ending in s (modulo a possible suffix due to self-loops).
Theorem 5. Solving MBSD for target mapping specifications and tree-like D A and D B is in PTIME for combined complexity, domain complexity, and mapping complexity.
Proof. Given an MBSD instance P_tree with tree-like D_A and D_B, consider the two-player game structure G_{P_tree} = (A, W), where the game arena A is as described in Section 4.2. It is immediate to see that, since D_A and D_B are tree-like, so is A, if we consider the edges defined by α and β (which reflect those in D_A and D_B). Now, note that for every node (s, t) ∈ U in the arena A and for every i = 1, ..., k, we can easily check whether the unique play ρ of A that ends in (s, t) contains two (possibly distinct) nodes with indices j_i and l_i such that λ_A(s_{j_i}) |= ϕ_i and λ_B(t_{l_i}) |= ψ_i. If that is the case, we call (s, t) an i-accepting node. Then, we define the set of accepting states as g = {u ∈ U | u is i-accepting, for every i = 1, ..., k}, and the winning condition as W = Reach(g). In this way, G_{P_tree} is a reachability game, constructed in time polynomial in the size of P_tree and solvable in linear time in the size of G_{P_tree}. The result then follows since P_tree has a solution if and only if there is a winning strategy for G_{P_tree}.
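Because each arena node has a unique play leading to it, the i-accepting check amounts to a single top-down pass that accumulates the witnessed sub-formulas along each branch. A Python sketch with illustrative encodings (not from the paper):

```python
# Mark the accepting nodes of a tree-like arena: a node is in g iff, for every
# i < k, the unique play to it has witnessed both phi_i (in D_A) and psi_i (in D_B).

def accepting_nodes(root, children, sat_phi, sat_psi, k):
    """sat_phi(u) / sat_psi(u): sets of indices i whose phi_i / psi_i hold
    at the D_A / D_B state inside arena node u."""
    acc = set()
    stack = [(root, frozenset(), frozenset())]
    while stack:
        u, phis, psis = stack.pop()
        phis = phis | sat_phi(u)         # accumulate witnesses along the branch
        psis = psis | sat_psi(u)
        if all(i in phis and i in psis for i in range(k)):
            acc.add(u)                   # u is i-accepting for every i
        for v in children(u):
            stack.append((v, phis, psis))
    return acc
```

The pass touches each node once with O(k) work per node, consistent with the PTIME bound of Theorem 5.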
As before, the combined and domain complexities become EXPTIME when D_A and D_B are described succinctly.

Solving MBSD with General Mapping Specifications
The final variant of mapping specifications that we study is the most general form, where Φ can be an arbitrary ltl f formula over Prop_A ∪ Prop_B. For this, we exploit the fact that for every ltl f formula Φ there exists a DFA F_Φ that accepts exactly the traces that satisfy Φ (De Giacomo & Vardi, 2013). Depending on which agent stops, the problem specializes into one of the following:
• if A stops: find a strategy for B such that every trace always visits an accepting state of F_Φ;
• if B stops: find a strategy for B such that every trace eventually reaches an accepting state of F_Φ.
To solve this variant, we again reduce MBSD to a two-player game structure G P = (A, W ), as in our previous constructions, then solve a safety game, if A stops, and a reachability game, if B stops.To follow the mapping as the game proceeds, we incorporate F Φ into the arena.This requires a careful synchronization, as the propositional labels associated with the states of dynamic domains affect the transitions of the automaton.
Intuitively, A models the synchronous product of the arena defined in Section 4 with the DFA F_Φ. As such, the DFA first needs to make a transition from its own initial state q_0 so as to read the labeling information of the initial states s_0 and t_0 of D_A and D_B, respectively. This is already accounted for by q'_0 in the initial node u_0 of the arena. At every step, from the current node u = (s, t, q), P1 first chooses the next state s' of D_A, then P2 chooses a state t' of D_B, both according to their transition relations, and finally F_Φ progresses from q to q' = η(q, λ_A(s') ∪ λ_B(t')), according to its transition function η, by reading the labeling of s' and t'.
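The synchronized product can be sketched as follows. For brevity, the sketch collapses each P1 move and the subsequent P2 reply into a single combined edge, omitting the intermediate V nodes; all encodings and names are illustrative, not from the paper:

```python
# Build the product arena of D_A, D_B and the DFA F_Phi by forward exploration.
# Labels are frozensets of propositions; the DFA reads the union of the two
# labelings after each pair of moves, exactly as in the synchronization above.

def build_arena(dA, dB, dfa):
    """dA = (S, s0, deltaA, lamA), dB = (T, t0, deltaB, lamB),
    dfa = (Q, q0, eta, acc)."""
    S, s0, deltaA, lamA = dA
    T, t0, deltaB, lamB = dB
    Q, q0, eta, acc = dfa
    u0 = (s0, t0, eta(q0, lamA(s0) | lamB(t0)))  # DFA first reads the initial labels
    nodes, edges = {u0}, []
    frontier = [u0]
    while frontier:
        s, t, q = frontier.pop()
        for s2 in deltaA(s):               # P1's choice in D_A
            for t2 in deltaB(t):           # P2's reply in D_B
                q2 = eta(q, lamA(s2) | lamB(t2))
                v = (s2, t2, q2)
                edges.append(((s, t, q), v))
                if v not in nodes:
                    nodes.add(v)
                    frontier.append(v)
    goal = {u for u in nodes if u[2] in acc}   # nodes where F_Phi is accepting
    return nodes, edges, u0, goal
```

The returned `goal` set is exactly the set g defined next, over which the safety or reachability objective is imposed.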
For the winning objective W , define the set of goal nodes g = {u ∈ U | u = (s, t, q) such that q ∈ acc}.That is, g consists of the nodes in the arena where F Φ is in an accepting state.Then, we define W = Safe(g) (to play a safety game), if Ag stop = A, and W = Reach(g) (to play a reachability game), if Ag stop = B.
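Solving Safe(g) amounts to a greatest-fixpoint computation that repeatedly discards nodes from which P2 cannot keep the play inside g. A self-contained Python sketch (illustrative encoding; for simplicity, player-1 nodes with no successors count as vacuously safe, modeling plays that simply stop there):

```python
# Greatest fixpoint for a safety game Safe(safe): keep only the nodes from
# which player 2 (agent B) can stay within `safe` forever.

def solve_safety(nodes, edges, owner, safe):
    succ = {u: [] for u in nodes}
    for u, v in edges:
        succ[u].append(v)
    win = set(safe)
    changed = True
    while changed:
        changed = False
        for u in list(win):
            moves = [v for v in succ[u] if v in win]
            if owner[u] == 2:
                ok = bool(moves)                    # B needs one safe successor
            else:
                ok = len(moves) == len(succ[u])     # A must have no escape
            if not ok:
                win.discard(u)
                changed = True
    return win
```

Each fixpoint round removes at least one node, so the computation is polynomial in the arena size; the reachability case (Ag_stop = B) is handled by the attractor construction used for the earlier variants.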
The following theorem states the correctness of the construction.
Theorem 6. There is a solution to P if and only if there is a solution to G_P.
Proof. Let Ag_stop = A (the case of Ag_stop = B is similar), thus G_P = (A, Safe(g)). By Definition 3, P has a solution σ iff, for every trace τ_A of D_A, we have that λ(τ_A, σ(τ_A)) |= Φ. That is, λ(τ_A, σ(τ_A)) is accepted by F_Φ, i.e., the run of λ(τ_A, σ(τ_A)) on F_Φ ends at an accepting state q ∈ acc. Due to the strict one-to-one correspondence between the transitions of G_P and those of D_A, D_B, and F_Φ, we can simply transform σ to be such that σ : V^+ → U. Hence, every play ρ = ρ_0 ρ_1 ··· ρ_n of A compatible with σ is such that ρ_k ∈ g for every even k, 0 ≤ k ≤ last(ρ). By the definition of safety game, this holds iff σ is a winning strategy of G_P = (A, Safe(g)).
Clearly, the winning strategy σ constructed from the reduced game G_P is a solution to P.
Finally, we obtain the following complexity result for the problem in its most general form.
Theorem 7. Solving MBSD for general mapping specifications can be done in 2EXPTIME in combined complexity and mapping complexity, and in PTIME in domain complexity.
Proof. Constructing the DFA F_Φ from the mapping specification Φ takes time doubly exponential in the number of sub-formulas of Φ (De Giacomo & Vardi, 2013). Once F_Φ is constructed, observe that the game arena A is the product of D_A, D_B, and the DFA F_Φ, which can be constructed in time polynomial in the sizes of D_A, D_B, and F_Φ. Moreover, both safety and reachability games can be solved in linear time in the size of A, from which it follows that the MBSD problem for general mappings is in 2EXPTIME in combined complexity, PTIME in domain complexity, and 2EXPTIME in mapping complexity.
Mimicking has recently been studied in Formal Methods (Amram, Bansal, Fried, Tabajara, Vardi, & Weiss, 2021). In (Amram et al., 2021), the notion of mimicking is specified by separated GR(k) formulas, a strict fragment of ltl. This makes that setting unsuitable for specifying mimicking behaviors of intelligent agents, since an intelligent agent will not keep acting indefinitely, but only for a finite (though unbounded) number of steps. Moreover, the distinction between the two systems and the mimicking specification was not singled out there, which makes it difficult to provide a precise computational complexity analysis with respect to the systems and the mimicking specification separately.
A closely related, though more specific, work is Automatic Behavior Composition (De Giacomo, Patrizi, & Sardiña, 2013), where a set of available behaviors must be orchestrated so as to mimic a desired, unavailable, target behavior. That work deals with a specific mapping specification over actions, corresponding to the formal notion of simulation (Milner, 1971). The current work devises a more general framework and a solution approach for a wider spectrum of mapping specifications, in a finite-trace setting.
Finally, we note that our framework is similar to what is studied in data integration and data exchange (Lenzerini, 2002; Fagin, Kolaitis, Miller, & Popa, 2005; Giacomo, Lembo, Lenzerini, & Rosati, 2007; Kolaitis, 2018), where there are source databases, target databases, and mappings between them that relate the data in one with the data in the other. While similar concepts can certainly be found in our framework, here we do not consider data but dynamic behaviors, an aspect which makes the technical development very different.

Conclusion and Discussion
We have studied the problem of mimicking behaviors in separated domains, in a finite-trace setting where the notion of mimicking is captured by ltl f mapping specifications.The problem consists in finding a strategy that allows an agent B to mimic the behavior of another agent A. We have devised an approach for the general formulation, based on a reduction to suitable two-player games, and have derived corresponding complexity results.We have also identified two specializations of the problem, based on the form of their mappings, which show simpler approaches and better computational properties.For these, we have also provided illustrative examples.
A question that naturally arises, for which we have no conclusive answer yet, is to what extent domain separation and possibly separated types of conditions can be exploited to obtain complexity improvements in general, not only on the problems analyzed here.In this respect, we take the following few points for discussion.
We first note that the framework in (Amram et al., 2021) can be adapted to an infinite-trace variant of MBSD, with target mapping specifications of the form Φ = ∧_{l=1}^{k} ((∧_{i=1}^{n_l} ♦(ϕ_{l,i})) → (∧_{j=1}^{m_l} ♦(ψ_{l,j}))). The results in (Amram et al., 2021), which build heavily on domain separation, can be tailored to obtain a polynomial-time algorithm for (explicit) separated domains in combined complexity. In contrast, Theorem 4 in this paper shows that the finite-trace variant is PSPACE-hard already for much simpler mappings. This gap seems to suggest that domain separation cannot prevent the book-keeping that is possibly mandatory in the finite case. Note, however, that Theorem 2 of this paper can easily be extended to specifications of the form Φ' = ∧_{l=1}^{k} ((∧_{i=1}^{n_l} ♦(ϕ_{l,i})) → (∧_{j=1}^{m_l} ♦(ψ_{l,j}))), yielding an algorithm whose running time is polynomial in the domain size but exponential in the number of Boolean sub-formulas in Φ'.
A second observation is the following. While the result in Section 6 provides an upper bound for mappings expressed as general ltl f formulas, one can consider the more relaxed form Φ = ∧_{i≤k}(φ_i → ψ_i), where each φ_i (resp., ψ_i) is an ltl f formula over Prop_A (resp., Prop_B) only. While still PSPACE-hard (see Theorem 4), it is tempting to use some form of memory-keeping, as done in Theorem 2, to avoid the 2EXPTIME complexity. The challenge, however, is that every attempt to monitor satisfaction of even a single ltl f sub-formula, whether φ_i or ψ_i, seems to require an ltl f-to-DFA construction that already yields the 2EXPTIME cost. Another approach could be to construct a DFA separately for each ltl f sub-formula, then combine them along with the product of the domains and continue as in Section 6. This, however, involves a game whose state space to explore is the (non-minimized) product of the respective DFAs, typically much larger than the (minimized) DFA constructed directly from Φ (as observed in (Tabajara & Vardi, 2019; Zhu, Tabajara, Pu, & Vardi, 2021)). Moreover, in practice, state-of-the-art tools for translating ltl f to DFAs (Bansal, Li, Tabajara, & Vardi, 2020; De Giacomo & Favorito, 2021) take maximal advantage of automata minimization. How to avoid the DFA construction for such separated mappings, so as to gain a computational advantage, is yet to be explored.

Theorem 2. MBSD with target mapping specifications can be solved in time polynomial in |D_A × D_B| × |Φ| × 4^k, with Φ the mapping specification and k the number of its conjuncts.

Proof. Given an MBSD instance P with target mapping specifications, we construct a reachability game G_P as shown above, which has size |D_A × D_B| × 4^k and can be constructed in time polynomial in |D_A × D_B| × |Φ| × 4^k. The result then follows from Lemma 2 and from the fact that reachability games can be solved in linear time in the size of the game.
Then the MBSD instance P_F is constructed as follows. The domains D_A and D_B are depicted in Figure 1, where s^A_1 and s^B_1 are the initial states for agents A and B, respectively. For every H ∈ {A, B} and i ∈ {1, 2}, every node s^H_i⊤ carries a proposition p^H_i⊤ and every node s^H_i⊥ carries a proposition p^H_i⊥. Agent H can move to s^H_{i+1} only through exactly one of the following paths: a directed true path that visits a vertex s^H_i⊤ labeled by {p^H_i⊤}, or a directed false path that visits a vertex s^H_i⊥ labeled by {p^H_i⊥}. From s^H_{n+1} there is only a directed self-loop. Thus, the choice of which path to take determines whether the subformula ♦(p^H_i⊤) is satisfied (corresponding to setting x_i = true) or ♦(p^H_i⊥) is satisfied (corresponding to setting x_i = false). Finally, label s^A_{n+1} with {p^A_*} and s^B_{n+1} with {p^B_*}; then ♦(p^H_*) is true in every play (in the sketch of Figure 1, where n = 2, it is s^A_3 that carries {p^A_*} and s^B_3 that carries {p^B_*}). The stop agent Ag_stop is set to B. The mapping specification is as follows: