Sony Computer Science Lab. Tokyo, Japan
Tel: +81 3 5448 4380, email: rodger@csl.sony.co.jp
Olof Hagsand, Martin Stenius
Swedish Institute of Computer Science, Kista, Sweden
Tel: +46 8 752 15 00, email: olof@sics.se
Abstract Building a distributed virtual environment that scales to many participants in low bandwidth, high latency networks is a technical challenge. The key issues are maintaining acceptable performance in the face of high latency links, and maintaining consistency of shared world data between multiple participants. This paper describes our overall architecture that enables us to build such a wide area shared virtual environment targeted at the Internet. The architecture relies on spatial partitioning of the shared scene to reduce communication, replication to hide latency, and group communications to maintain replica consistency. The paper discusses the generic architecture and the key issues that must be solved, then presents two implementations of that architecture and gives performance results from one of them.
Introduction
The Virtual Society (VS) project is a long-term research initiative that is investigating how the future electronic society will evolve. Recent trends in communications, audio-visual technology and computing devices point to a synergy that will create a comprehensive electronic network, ubiquitous in the home and the office. Such an infrastructure will allow easy access to media and data from a variety of sources and will deliver this information to users wherever they may be. Further, such an infrastructure will support much higher degrees of interaction than is currently available, allowing users not just to consume information, but also to produce information and to interact with information sources. We believe that this infrastructure will also provide a powerful basis for users to interact and to carry out useful work with others even when they are geographically remote.

As a first step in this investigation, we have chosen to explore the 3D spatial metaphor as a basis for a shared information and interaction space. Our choice of a 3D spatial metaphor is based on our belief that such a metaphor is an attractive, 'natural' environment within which users can interact. Rather than strive to find new metaphors to present data, we mimic the world in which we live. While it is clear that not all interaction needs or benefits from a three-dimensional setting, we believe that such a setting will allow a number of activities that are cumbersome and unnatural in the two-dimensional interfaces offered by most current computer systems to be carried out more easily and efficiently.
Thus, our goal has been to build a support infrastructure that allows many users to participate in a shared, interactive 3D world. Such interaction includes the ability to see each other, talk to each other, visit locales together and work with each other. Our proposed system has elements of a computer-supported cooperative work (CSCW) environment, a virtual reality system and an on-line chat forum. Such systems have already been explored in a number of experimental research platforms; however, in the majority of cases the work has been confined to high-bandwidth communication networks supporting small numbers of users. Our work differs in that our initial goal has been large-scale systems capable of supporting many geographically dispersed users interconnected through low-bandwidth, high-latency communication links.
This paper discusses how we have chosen to architect a large-scale virtual environment and the techniques we have adopted to address the issues that arise from scaling such an environment. It concentrates on two key issues: replication to allow scaling, with its attendant problem of consistency, and latency hiding in a low-bandwidth, high-latency system.
In both cases we exploit the spatial nature of our target system to allow us to adapt existing research work to help us address these issues.
This paper is structured as follows. In section 2 we outline the basic components needed to build a shared virtual environment. Section 3 discusses the issues that arise as we try to scale such an architecture, namely consistency maintenance and latency hiding. Section 4 details the architecture we have adopted to deal with these issues, which exploits the spatial nature of our target system to reduce both the data we must maintain in a coherent manner and the degree of consistency that we need to impose on this data. Section 4.3 overviews the group communication mechanism we use to support our distributed architecture. Section 5 introduces two systems we are working on, the CyberPassage system from Sony and the DIVE system from SICS, and discusses an implementation of our architectural model together with the performance results that we have obtained. Section 6 contrasts our work with other work in the area and, lastly, section 7 concludes and discusses our future plans.
A simple distributed VE architecture
A naive and basic infrastructure for a shared 3D world is simple; it consists of a database of objects that exist in the world, a set of tools to populate that database and a set of devices that display the contents of the database. The display device doubles as an input device and allows users to navigate through the world and to interact with other users and objects in the world. To achieve this it requires some form of communication that allows the display devices to access the database and to propagate user input to the database. Such an architecture is shown graphically in fig. 1.

Since one of our main goals is to support a shared world, one of the key differences between our work and existing 3D platforms is that each user is represented in the 3D world and each user sees a representation of all other users in the world. In a system that scales to many hundreds of users, supporting each user as a dynamic entity roaming the 3D world is a significant technical challenge.
A further issue is that we wish to use our system in a range of settings. At one end the system should permit the support of CSCW between researchers in geographically remote research labs where high end graphics machines and high bandwidth communication are available. At the other end we wish to allow users to shop with friends from the comfort of their own homes.
The major components of such a system are:
- The display device can range from a low cost consumer electronics device up to a high end graphics workstation.
- The communications link is of prime importance to the performance of the user device. In a consumer setting, current technology constrains us to a maximum bit rate of 28.8 kbits per second, whereas a modern research lab has access to megabit communication links.
- The server maintains the database of scenery objects that make up the world and users who are navigating through those scenes. It delivers the contents of the database to the display devices as and when needed.
Figure 1: A simple architecture
If our goal was to support a limited number of interacting users, then the simple architecture outlined above would suffice. However, because we wish to support many thousands of users we need to ensure that the architecture will scale.
Scaling issues
Our concern when scaling is twofold: firstly, scaling to support large numbers of users, and secondly, scaling to support those users in widely dispersed geographical locations.

To support large numbers of users we are primarily concerned with designing a system whose computational and communication costs do not rise linearly, or worse, with the number of users. In a low-bandwidth, high-latency environment, communication costs dominate. Communication in our system derives from the need to access the database for world data, to update the database as a result of user interaction, and to propagate those updates to the world participants. Thus our architecture must allow us to constrain these communication costs.
The second issue is caused by geographical dispersion of users accessing the world data. Even if we design an architecture that constrains communication costs as a function of users, we still need to send data from the database to the users across high latency links. Thus we also need to ensure that our architecture allows us to reduce these costs.
Database partitioning through auras
To reduce sharing we partition the shared world according to an aura. The aura is a notion that has evolved out of work in the area of computer-supported cooperative work and defines a sphere of interest associated with a user [5]. In this previous work, auras have been used as a spatial notion to support interaction models. We have adapted their use to one whose main purpose is to define the degree of sharing and, where necessary, to reduce sharing.

Objects in our system exist in a virtual world. Each world defines a virtual space captured using a 3D co-ordinate system. Each object specifies a dynamic aura that represents the portion of the virtual space in which it is interested. A separate unit, an Aura Manager (AM), constantly monitors objects as they move around the shared world and informs objects when other objects collide with their aura.
Figure 2: The abstract architecture
In figure 2, we can see a simple system with three user objects and two scenery objects. Object 1 and object 2 are in each other's auras and so have a communications link between them.
In essence, we use the notion of aura to partition the database, and the Aura Manager to track the database partitions.
In our current model, we use auras as a means to control spatial interaction. However, an aura can be concerned not simply with space but also with aural or sensory interaction. Thus a user may have a large visual aura but a small auditory aura. In addition, the aura may be dynamic. For example, when a user enters a crowded room, then it is likely that they would wish to reduce their visual aura to cut down on the amount of information they need to be concerned with.
This second use of the aura is at the user level. However, the same approach is directly applicable at the system level because, for engineering reasons, as a user enters a crowded locale we wish to reduce the degree of interaction and so minimise the amount of consistency maintenance that must be supported.
This use of an aura allows us to partition the world database so that any one participant is only interested in a subset of that database. Since they are only interested in a subset, they only need to receive information about that subset and not about all objects in the database. Hence we can break the linear relationship between users and communication.
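To make the aura mechanism concrete, the sketch below shows one plausible way an Aura Manager might detect aura collisions and signal when communication links should be set up or torn down. It assumes spherical auras, and the names (WorldObject, AuraManager, neighbours) are hypothetical; the actual systems described later differ in detail.

```python
# Illustrative sketch only: spherical auras and hypothetical class names.
import itertools
import math


class WorldObject:
    def __init__(self, name, position, aura_radius):
        self.name = name
        self.position = position          # (x, y, z) in world co-ordinates
        self.aura_radius = aura_radius    # sphere of interest around the object
        self.neighbours = set()           # objects currently sharing an aura


class AuraManager:
    """Monitors all objects and reports aura collisions as they move."""

    def __init__(self):
        self.objects = []

    def register(self, obj):
        self.objects.append(obj)

    def _auras_intersect(self, a, b):
        return math.dist(a.position, b.position) <= a.aura_radius + b.aura_radius

    def update(self):
        """Call once per simulation tick after objects have moved."""
        for a, b in itertools.combinations(self.objects, 2):
            intersecting = self._auras_intersect(a, b)
            already = b in a.neighbours
            if intersecting and not already:
                a.neighbours.add(b); b.neighbours.add(a)
                print(f"{a.name} and {b.name} now share an aura -> set up comms")
            elif not intersecting and already:
                a.neighbours.discard(b); b.neighbours.discard(a)
                print(f"{a.name} and {b.name} no longer share an aura -> tear down comms")
```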
Latency hiding
The second major problem we face when building a shared virtual world is ensuring that interactions work in real time. By this we mean that our maximum communication time is bounded by a user-perceived notion of interaction delay. It has been shown that a delay in the 'action-result' cycle of more than 250 msec will deter users from using the system [1]. The user may initiate an event, such as selecting an object in a scene, and expects the effect of that action to be visible within a bounded time.

We have already started to address this problem with our use of auras: since we have reduced the number of participants that must take part in any data update, we have reduced the time needed to reach consensus on the consistent state. However, this is not enough, since we still have to make remote requests to access state. To address this problem we have adopted a distributed systems approach, i.e. we replicate the world database. By making a copy of the world database geographically close to users we reduce the communication overhead needed to access data.
Distributing data through replication is a well known technique and has been used in many different settings. In all cases, the main problem arising from this approach is the maintenance of replica consistency. Obviously, as we make multiple copies of data we are forced to ensure that the copies hold the same values.
When dealing with data we can usefully class it into three major categories:
- Static data. This is data which is read only and is never changed.
- Dynamic data whose current value may be 'out of date'. This type of data changes over time, but it is acceptable for accesses to this data to return old values.
- Dynamic data that must always be 'up to date'. Accesses to this type of data must always return the most recently updated value.
Proxies as software caches
To manage these categories of data we make use of the general notion of proxies. A proxy is a local representative for a remote entity. A proxy can be viewed as a local software cache and allows us to greatly increase the speed of access to certain data types. A proxy also represents a copy of the replicated world data.

When an object is informed by the AM that it is in the aura of another object, it creates a communication link to that object by creating a proxy for itself and giving it to the other object. Returning to figure 2, object 1 creates a proxy for itself and gives it to object 2. Object 2 then uses the local proxy as if it were the remote object. Any queries or operations are actually performed on the proxy.
This approach allows object 1 to define which information is cached in the local proxy, which information is always held in object 1 and, more importantly, how the cached information is updated. Returning to our three categories of data, static data is cached in the proxy and any access to such data always uses the cached value. For data that is dynamic but may be out of date, the proxy can always return the currently cached value and query the actual data value at periodic intervals. For dynamic data that must be up to date, a request to the local proxy results in the proxy requesting the data from the real object via a remote communication.
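The following sketch illustrates how a proxy might apply a different policy to each of the three data categories. The category names, the refresh interval and the remote fetch() call are assumptions made for illustration; they are not the interfaces of the systems described later.

```python
# Illustrative proxy with per-attribute caching policies (hypothetical API).
import time

STATIC, SOFT, STRICT = "static", "soft", "strict"


class Proxy:
    def __init__(self, remote, categories, soft_refresh_interval=5.0):
        self.remote = remote              # the real (remote) object, assumed to offer fetch()
        self.categories = categories      # attribute name -> category
        self.refresh = soft_refresh_interval
        self.fetched_at = {}              # last refresh time per attribute
        # Static data is copied into the cache once, when the proxy is handed out.
        self.cache = {a: remote.fetch(a)
                      for a, c in categories.items() if c == STATIC}

    def read(self, attr):
        category = self.categories[attr]
        now = time.time()
        if category == STRICT:
            # Must always be up to date: forward the request to the real object.
            return self.remote.fetch(attr)
        if category == SOFT:
            # May be out of date: serve the cache, refreshing it periodically.
            stale = now - self.fetched_at.get(attr, 0.0) > self.refresh
            if attr not in self.cache or stale:
                self.cache[attr] = self.remote.fetch(attr)
                self.fetched_at[attr] = now
            return self.cache[attr]
        # STATIC: read-only data, always served from the local cache.
        return self.cache[attr]
```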
In a more complex example where object A gives out proxies to several other objects, the consistency algorithms are actually implemented as algorithms to maintain the consistency of the information managed by the proxies. Depending on the actual algorithm used, updates and reads to the proxy will result in changes and values being propagated between the proxies and the actual object.
Distributed consistency
The fundamental model presented by the VS platform is one of a shared 3D space. Such a space, because it is shared, must be seen consistently by all users of that space. Thus any actions that occur in the shared space must be propagated to all participants in that space. A simple example serves to illustrate these points. Consider a virtual shop with two customers (A and B) who are physically at home, one in Tokyo and one in Hong Kong. When A enters the shop that B is already in, then B needs to see A and A needs to see B. If A is holding and examining an article, then it is required that B is not able to take that article from A (conflict), or that a copy of the article is made available for B to examine (conflict resolution). Lastly, if A then shows B the article and asks for B's opinion, it is necessary that the request for an opinion arrives after A has shown B the article. Otherwise B will be asked for an opinion on an article that they have never seen!

As discussed above, to address the issue of scaling, and its associated latency problem, we are forced to replicate data. Maintaining these replicated databases (or proxies) in a coherent manner in the face of message failures and unbounded delays is a significant problem and one on which there has been much work in the research community. This includes work on message-based systems [2][4] and work that focuses on distributed memory [3][10][11]. However, in most cases the work either does not scale, or will scale only if we accept a performance overhead or significant delay.
We have started to address the scaling issue by partial replication through spatial auras; however, we still need to provide consistency mechanisms. Our basic infrastructure is based on a group communication protocol.
Group communications for consistency support
The actual consistency mechanism used between proxies and the actual object depends on the requirements of the data. However, in all cases the algorithm relies on a group communications package. Groups define an endpoint for communication that abstracts from the individual members. Sending to a group will propagate the message to all members of the group and may elicit one or more replies. The actual source of the reply may be unknown and not specifically mentioned in the request.

Consistency support is built by using groups to define who should be consistent, and then using group communications to send updates to all members of the consistency group. As a change is made, it is propagated to all members of the group, who then make the change locally. Consistency is guaranteed by using a combination of message sending and locking.
Figure 3: Using groups for differing consistency requirements
In figure 3, assume that objects 1 and 2 represent clients that are connected by a high-bandwidth LAN link, and that object 3 represents a client connected via a low-bandwidth link. In this case, we would define two consistency groups. The first, Group 1, would be a strictly consistent group in which all updates to an object are propagated to the proxies immediately. This is possible because the communication delay and cost in a high-bandwidth LAN environment are low. However, since object 3 represents a client connected over a low-bandwidth WAN, we wish to reduce communication costs to this client. Hence Group 2 would be a weakly consistent group in which updates are propagated, for example, only at fixed time intervals.
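As an illustration of figure 3, the sketch below shows one way strict and weak consistency groups might differ: the strict group forwards every update to its members immediately, while the weak group coalesces updates and flushes them at a fixed interval. The class and method names are hypothetical.

```python
# Hypothetical sketch of two consistency-group flavours.
class StrictGroup:
    def __init__(self, members):
        self.members = members

    def update(self, obj_id, state):
        # Every change is sent to all member proxies straight away.
        for m in self.members:
            m.apply(obj_id, state)


class WeakGroup:
    def __init__(self, members, interval):
        self.members = members
        self.interval = interval          # seconds between flushes
        self.pending = {}                 # latest state per object, coalesced

    def update(self, obj_id, state):
        # Only the most recent state per object is kept until the next flush.
        self.pending[obj_id] = state

    def flush(self):
        # Called by a timer every `interval` seconds.
        for m in self.members:
            for obj_id, state in self.pending.items():
                m.apply(obj_id, state)
        self.pending.clear()
```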
The semantics of the send are crucial to the consistency support; the stronger the semantics, the less work has to be carried out by the consistency algorithm. There are many message models that have been researched in the literature. We have decided to adopt a simple subset that includes the following two message semantics:
- send to all, no delivery guarantee.
- send to all; messages from a single sender are delivered in the order they were sent, but are unordered with respect to other senders.
Once the basic group notion is available, we are able to map the notion of auras to groups. Each object's aura is represented by a single group. When another object comes within that aura it joins the group representing the aura. At this point, any messages sent to the group will be sent to all members who are in the group, i.e. within the aura.
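A minimal sketch of these ideas follows: a group abstraction offering the two send semantics listed above, and a mapper that associates each object's aura with a group, joining and leaving that group as aura collisions are reported. The interfaces are illustrative only, assuming a deliver(sender, msg, seq) callback on each member.

```python
# Illustrative group abstraction and aura-to-group mapping (hypothetical API).
class Group:
    def __init__(self, name):
        self.name = name
        self.members = set()
        self.seq = {}                     # per-sender sequence counters

    def join(self, member):
        self.members.add(member)

    def leave(self, member):
        self.members.discard(member)

    def send_unreliable(self, sender, msg):
        # Semantics 1: send to all, no delivery guarantee; a real transport
        # (e.g. plain UDP multicast) may silently drop the message.
        for m in self.members:
            m.deliver(sender, msg, seq=None)

    def send_sender_ordered(self, sender, msg):
        # Semantics 2: each sender's messages carry a sequence number so that
        # receivers can order them; no ordering across different senders.
        self.seq[sender] = self.seq.get(sender, 0) + 1
        for m in self.members:
            m.deliver(sender, msg, seq=self.seq[sender])


class AuraGroupMapper:
    """Maps the spatial notion of an aura onto a group: an object entering
    another object's aura joins the group representing that aura."""

    def __init__(self):
        self.groups = {}                  # aura owner id -> group

    def group_for(self, owner_id):
        return self.groups.setdefault(owner_id, Group(f"aura:{owner_id}"))

    def on_aura_enter(self, owner_id, visitor):
        self.group_for(owner_id).join(visitor)

    def on_aura_exit(self, owner_id, visitor):
        self.group_for(owner_id).leave(visitor)
```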
In conclusion, we have proposed an architecture that uses a spatial notion, the aura, to partition shared data and reduce communication costs. To attack the problem of communication latency we replicate the data that we need to share. This leads to better performance but requires consistency mechanisms to ensure that replicas remain consistent. We support consistency using a basic multicast protocol that supports a group communications model, and we map the spatial notion of aura onto groups so that group membership can be managed as objects move. In the following section we discuss two implementations of this basic model, each of which concentrates on different aspects of the model.
Implementation of the architecture
We have two ongoing developments. The first, known as the CyberPassage system, is wholly a Sony development within our research labs. The second is a longer-term research project, carried out jointly with the Swedish Institute of Computer Science (SICS) using the DIVE [8] platform developed at SICS. Our joint project is known as the Wide Area Virtual Environment (WAVE) project.

In this section we briefly introduce both developments, outline their differences, and present performance figures from the WAVE project.
CyberPassage
CyberPassage is the name of a suite of software developed by Sony Research Labs to support shared 3D worlds on the Internet. From the outset the CyberPassage system has been targeted towards low-bandwidth, high-latency networks. This contrasts with the DIVE system described below.

CyberPassage consists of a PC-based browser, a PC-based authoring tool called CyberPassage Conductor, and a server system called CyberPassage Bureau.
The CyberPassage architecture is an implementation of the abstract architecture discussed in this paper. The implementation has elements of both a client-server and a peer-to-peer architecture. The clients in the system are the PC-based viewers; we assume they are connected over low-bandwidth, dial-up lines. They connect to a local server which holds information about the connected clients, i.e. their position, their aura and some data attributes. The server architecture is a replicated peer-to-peer one, with each server communicating with the others to inform them of updates originated by its connected clients. We chose this hybrid architecture because we did not want to run sophisticated consistency algorithms across dial-up lines. Rather, we run them between servers, which we assume are well connected, and serialise groups of clients using the server.
Figure 4: CyberPassage architecture
At the time of writing (footnote 1) the CyberPassage Bureau uses point-to-point links and strict consistency algorithms at the server level. The lightweight group mechanism and reliable multicast alluded to in this paper have been built within the WAVE project, described below, and will be incorporated into the CyberPassage Bureau during the coming months.
Rather, the CyberPassage system has concentrated on the client-server aspects of the system and addressed the most significant problem: optimising communication across low-bandwidth dial-up links with data rates typically between 9.6 Kbps and 28.8 Kbps. We have done this in two ways. Firstly, we have designed a high-speed protocol for 3D data. Secondly, we have developed a general model for off-loading computation to clients so that events may be propagated between the server and clients rather than the results of those events, i.e. state.
Low-bandwidth communications link
Our approach here has been three-fold. Firstly, we have developed a client-server protocol, the Virtual Society Client Protocol (VSCP), that is optimised to support 3D transformations with minimal data exchange. Thus, for example, we represent a rotation event on a 3D object with a 42-byte packet. However, because our system is open and dynamic, it is necessary that the protocol supports application-specific communication between clients and the server. To do this, the VSCP protocol has been designed in an object-oriented manner so that application-specific messages are derived from a generic message. More details of this work can be found in [].

Secondly, we partition data between the client and the server. Typically, static scene data, or data whose consistency is not important, is held at the client. Changes to this data, if any, are sent infrequently. Dynamic data is managed by the servers, which replicate it and maintain its consistency.
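The actual VSCP packet layout is not reproduced here; as a hedged illustration of the general idea, the sketch below packs a rotation event into a compact fixed-size binary message rather than a verbose textual one. The field names and sizes are invented for the example and do not describe the real protocol.

```python
# Hypothetical compact encoding of a rotation event (not the real VSCP format).
import struct

# Assumed layout: message type, object id, rotation axis (x, y, z) and angle,
# all little-endian.
ROTATION_FMT = "<B I 3f f"      # 1 + 4 + 12 + 4 = 21 bytes
MSG_ROTATION = 0x01


def encode_rotation(object_id, axis, angle_radians):
    ax, ay, az = axis
    return struct.pack(ROTATION_FMT, MSG_ROTATION, object_id,
                       ax, ay, az, angle_radians)


def decode_rotation(packet):
    msg_type, object_id, ax, ay, az, angle = struct.unpack(ROTATION_FMT, packet)
    assert msg_type == MSG_ROTATION
    return object_id, (ax, ay, az), angle


# Example: roughly 21 bytes on the wire for one rotation update.
pkt = encode_rotation(42, (0.0, 1.0, 0.0), 1.5708)
print(len(pkt), decode_rotation(pkt))
```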
Lastly, we have off-loaded movement computation to the clients. Our model uses the notion of a movement behaviour, an algorithm that describes characteristics of an object's acceptable movement patterns. To reduce server-to-client communication to a minimum, these movement behaviours are invoked at the desired frame rate at each client site. The net result is that each client performs a local movement calculation in parallel, but because they are all running the same algorithm, they remain consistent within a bounded error. When a movement vector is transmitted from the server to all clients, each client performs a local correction, hence reaching a globally consistent state.
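A minimal sketch of this idea, not the actual CyberPassage script engine, is shown below: each client advances an object with the same deterministic behaviour at its local frame rate, and an occasional authoritative movement vector from the server corrects any accumulated divergence.

```python
# Illustrative client-side movement behaviour with periodic server correction.
class MovingObject:
    def __init__(self, position, velocity):
        self.position = list(position)
        self.velocity = list(velocity)

    def behaviour_step(self, dt):
        # The same algorithm runs at every client at its local frame rate, so
        # all replicas stay within a bounded error of each other.
        for i in range(3):
            self.position[i] += self.velocity[i] * dt

    def apply_server_update(self, position, velocity):
        # Infrequent authoritative state from the server; a local correction
        # brings this replica back to the globally consistent value.
        self.position = list(position)
        self.velocity = list(velocity)


# Example: 30 local frames between two server updates.
car = MovingObject((0.0, 0.0, 0.0), (1.0, 0.0, 0.0))
for _ in range(30):
    car.behaviour_step(1.0 / 30.0)
car.apply_server_update((1.0, 0.0, 0.0), (1.0, 0.0, 0.0))
```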
The framework for the support of movement behaviours is based on a script engine which is resident at each local client. Scripts to describe movement are either loaded from local storage or downloaded from the server. Obviously, the use of scripts is not restricted to describing movement but can be used for any arbitrary local computation. In particular we use scripts to drive user interaction dialogues. This script model has been applied to the VRML1.0 language. We have proposed these extensions to the VRML standards committee and this model is now the basis of the draft VRML2.0 standard.
The CyberPassage system is fully functional and supports distributed shared worlds, multi-user chat features and shared behaviours. It is based on the WWW and VRML standard, freely downloadable (footnote 2) and in daily use by many people.
The WAVE project
The WAVE project is investigating wide area virtual environments. It uses the DIVE system as its experimental platform. DIVE is an extremely sophisticated distributed virtual environment [8] that has been under development at SICS since 1990. DIVE was originally targeted at LANs and high-end workstations and adopted a fully replicated, peer-to-peer approach. Although DIVE is capable of working in a lower-bandwidth wide area network such as the Internet, it used some heavyweight mechanisms that restricted its usability. One of the goals of the WAVE project has been to experiment with mechanisms to make DIVE more amenable to networks such as the Internet. In particular, the WAVE project has investigated lightweight groups, reliable multicast and spatial partitioning within the DIVE system.
In the following section we present a distributed application and associated performance results from the WAVE project that uses these techniques.
Lightweight group experiments in DIVE
Our work within WAVE has been targeted at the multicast and group communication aspects of our design. In particular, we have implemented a lightweight group mechanism that supports the aura model within DIVE and used it in experiments on the MBone.

DIVE differs from CyberPassage in that it is a pure peer-to-peer architecture. There is no notion of client and server; rather, each client acts as a local server. Each client holds a replica of the database and works with other clients to maintain the consistency of that database.
DIVE lightweight groups
In its original form, DIVE worked at the granularity of a 'world'. A world was an application abstraction but was typically coarse-grained, for example consisting of all objects in a virtual city scene. Associated with the 'world' was a group. Browsers viewing the world joined the group, and each group member held a full replica of the 'world'. Thus DIVE replicated the entire database and, although capable of working in the Internet, suffered from performance degradation with more than a small number of participants in a shared world. To address this issue DIVE has been extended with lightweight groups as a mechanism to partition the database.

The details of the lightweight groups can be found in []. Briefly, DIVE implements an object hierarchy in which a world scene consists of many entities, each of which is part of a hierarchy rooted at a special object, the world entity. The lightweight group mechanism allows an application writer to associate a group with any entity in the hierarchy. All objects below that entity belong to the group. If another group is attached further down the hierarchy, then objects below that point belong to both groups.
Messages can be sent to lightweight groups and will be delivered to any browser that has explicitly joined the group. Thus a browser can join a selection of groups associated with objects that interest it, and hence only partially replicate the world database.
Given this basic mechanism, the next step is to provide a control entity that tells browsers which objects are important, and so which groups they should join. In our experiments we have built an Aura Manager to do this job. The Aura Manager has a copy of the entire database; it tracks where each browser is in the scene and tells it which groups to join as it navigates around the scene.
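The sketch below is an illustration of the lightweight group idea rather than DIVE's actual API: groups are attached to entities in the hierarchy, updates about an entity are sent to the closest enclosing group, and a browser only receives updates for the groups it has joined, joining and leaving them as the Aura Manager directs. All names are hypothetical.

```python
# Illustrative lightweight groups over an entity hierarchy (hypothetical API).
class Entity:
    def __init__(self, name, parent=None, group=None):
        self.name = name
        self.parent = parent
        self.group = group                # lightweight group attached here, if any

    def update_group(self):
        """Group to which updates about this entity are sent: the closest
        group attached at or above the entity in the hierarchy."""
        node = self
        while node is not None:
            if node.group is not None:
                return node.group
            node = node.parent
        return None


class Browser:
    def __init__(self, name):
        self.name = name
        self.joined = set()

    def join(self, group):
        self.joined.add(group)

    def leave(self, group):
        self.joined.discard(group)

    def deliver(self, entity, msg):
        # Partial replication: an update is only seen if the browser has
        # joined the group the update was sent to.
        if entity.update_group() in self.joined:
            print(f"{self.name} received {msg} for {entity.name}")


# Example hierarchy mirroring the WAVE experiment below.
world = Entity("world", group="scene-group")
house = Entity("house", parent=world)
contents = Entity("house-contents", parent=house, group="contents-group")
car = Entity("car", parent=contents)

browser = Browser("amdiva")
browser.join("scene-group")
browser.deliver(car, "position update")   # dropped: contents group not joined
browser.join("contents-group")            # Aura Manager: house aura entered
browser.deliver(car, "position update")   # now received
```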
Experiment description
In our experiments with the lightweight group mechanism we have built a small demonstration application that consists of a simple shared scene. Within the scene is a house and some trees. Inside the house is a table with a small toy car that drives around on the table. The world scene, including the trees and house, is associated with one lightweight group; the interior contents of the house, including the moving car, have another group associated with them.
There are three components: the scene application, which manages the scene, including the house and the car; the browser application, which displays the scene and joins groups as needed; and the Aura Manager, which holds a full replica of the scene database.
The user navigates around the scene via the browser. When the user first enters the scene, the Aura Manager instructs the browser to join the general scene group. When it does, the underlying group join protocol requests a copy of the objects associated with that group. The application object sends it information about the general scene, the trees and the house. However, the contents of the house are not sent.
Independently, the application is controlling the contents of the house, and in particular the car object, causing position update messages to be sent continuously to the group associated with the house contents. However, because the browser has not joined the house contents group, it does not receive these messages.
As the user navigates closer to the house and enters its aura, the Aura Manager tells the browser that it has collided with the house aura and sends it the name of the contents group. The browser then joins the house contents group and requests the objects associated with this group. At this point the browser begins to receive messages destined for the house contents group and the user will see the car driving around the table.
As the user moves away from the house, the Aura Manager informs the browser that it has left the house aura; the browser then leaves the associated group and no longer receives updates associated with the house contents.
Performance results
In the graphs below we can see the results of this mechanism. The first graph shows the case of the original DIVE system where only one group is associated with the scene. Here there is a steady stream of communication from the house (amhouse) to the browser (amdiva) of approximately 18 messages per second. This represents the update messages for the car object. The messages in the opposite direction, from the browser to the house, are navigation movements. The graph is a time series over a period of 5000 seconds in which the browser performs a series of movements: entering the scene, moving towards the house, away from the house and back towards the house.
Figure 5: Performance without lightweight groups
Figure 6: Performance using lightweight groups
The second graph shows the results of using the lightweight groups. In this case, the message traffic from the house (amhouse) to the browser (amdiva) is initially zero. After approximately 100 seconds, the browser navigates towards the house and is told to join the house contents group by the Aura Manager, after which it receives the car movement messages at approximately 18 messages per second. When the browser moves away from the house the message traffic falls back to zero, rising again when the user navigates back towards the house.
The initial peaks of approximately 40 messages per second are caused by the state transfer as the browser requests information about the house contents. This information is sent by all group members, causing message replication, because both the house application and the Aura Manager reply. This is an artefact of the current group join protocol and will be removed in future systems.
In conclusion, although the application is trivial, it clearly shows the performance advantage of the lightweight group mechanism. By using lightweight groups we are able to maintain partial replicas and so reduce the message traffic needed to maintain consistency. The partial replicas are driven by the aura notion, which allows us to use a spatial concept as the basic mechanism to control sharing.
Related work
There are several projects looking at the issues of distributed VEs. We can identify two groups: those that concentrate on supporting sophisticated group interaction models suitable for work such as CSCW, and those that are interested in large-scale distributed simulations. In the former class are systems such as MR [12], Bricknet [15] and MASSIVE [16], which all exhibit a certain degree of similarity.
MASSIVE is targeted at shared conferencing with wide area participants. Current versions of MASSIVE support only limited numbers of users (tens) and do so by using point-to-point communication links established as a result of spatial proximity based on an aura. The MASSIVE system thus also uses the notion of the aura, but uses it to implement a spatial interaction model rather than, as we do, to define consistency groups. In addition, MASSIVE does not use communication groups but is currently built on point-to-point communication (footnote 3).
Aviary [9], although more concerned with immersive VR applications and tightly coupled distributed platforms, uses a number of techniques similar to ours. In particular, its use of the Environment Database (EDB) to manage collision detection, and its model for splitting the EDB when loads are high, closely parallel our approach. However, Aviary has limited support for replication and uses a point-to-point communication model.
NPSNET is an example of the second class and one that has explored many of the issues of large-scale interaction. Due to its target application, distributed battlefield simulation, NPSNET has concentrated on different issues from our work. In particular, its main concern with respect to consistency is position updates of battlefield units. It has adopted a best-effort approach to distributed consistency which relies on the DIS communication library [14] to distribute position updates. Recent work has experimented with large-scale multicast via the MBone, but again has taken a best-effort approach to consistency, relying primarily on the stateless nature of the objects [18]. In addition, NPSNET uses a geographic approach to define multicast groups whereby the world is partitioned into hexagonal areas, each associated with a multicast group. In contrast, we have adopted an object-centric approach to multicast groups based on a spatial aura and allow these auras to grow and contract according to application needs.
Our use of groups and multicast can be usefully contrasted with both MASSIVE and NPSNET, which have proposed mapping groups not to dynamic sets of related objects but to spatial areas. NPSNET, for example, envisages using hexagonal regions and assigning a multicast group to each region. Participants that enter a particular spatial area join the associated group and receive all broadcasts to that group. While this is a simple approach, it suffers from both scaling and false sharing problems, in that heavily used spatial areas will have many participants.
Conclusion and future plans
Building a scalable distributed shared virtual environment is a technical challenge. We have adopted a hybrid approach, using the spatial notion of the aura to reduce the degree of sharing and mapping it to a group mechanism to reduce the actual communication costs associated with consistency maintenance. Once the group mechanism is in place, we provide a framework in which different consistency mechanisms can be implemented, giving applications the flexibility to choose which guarantees they require.
Ongoing work is investigating the possibilities of further exploiting the spatial model to drive these consistency mechanisms. In particular, by using the graphics notion of Level of Detail (LOD) we can map spatially remote objects to low LODs, and thus to groups supporting weak consistency, while higher LODs map to groups with stronger consistency.
Apart from our ongoing development of the distributed platform, we have two threads to our future plans. The first is product oriented and involves collaboration with business units in Sony to transfer this technology into products. We are particularly interested in how the platform can be used in an Internet setting, and in how it can be used in the next generation interactive TV based on broadband communications to the home.
The second area is to generalise the work presented here to provide a general purpose replicated object package that works in a variety of communication settings and is adaptive, i.e. is capable of adjusting the degree of consistency provided according to both application requirements and infrastructure facilities. This is based on some of our work in the Apertos reflective operating system project [7].