Economics is one of the most important factors governing the assimilation and adoption of new technologies. In this information era, computing and telecommunication networks are growing at a remarkable rate. We will therefore study the impact of economics on the development and advancement of network technology. Network technology can be divided into three main components - Users, Network, and Services. In this paper, we will address the economic interactions between and within these components, focusing on pricing, cost, and settlement issues concerning the Network. In particular, we will discuss various pricing models for Internet access service, as well as the feasibility of implementing those models in the existing infrastructure. We will investigate the settlements among network access providers, among infrastructure providers, and between network access providers and infrastructure providers. We will provide a few case studies to illustrate the economic interactions described, and in each we will comment on the viability of the company or system. Finally, we will conclude with a forecast of trends in network technology and of whether network computing will be economically feasible in the near future.
Economics is one of the most important factors governing the assimilation and adoption of new technologies. However, the lack of accepted metrics for economic analysis of the Internet is increasingly problematic as the Internet grows in scale, scope, and significance to the global economy. In the past decade, we have all witnessed the Internet's rapid expansion, which has outpaced that of any other industry. Recently, it was recorded that Internet traffic is doubling every three months, and that the number of hosts has increased by 23% in the past six months [9].
We have entered an era dominated by network technology. The advancement of networking technology is bringing about the convergence of computing and communication technologies [1]. This convergence, encompassing technologies such as television, telephony, and computers, has in turn extended the reach of the Internet's innovations. Digital video, audio, and interactive multimedia are growing in popularity and increasing the demand for Internet bandwidth. However, there has been no corresponding convergence on the economics of the Internet. While advanced information and communication technologies make the network work, economic issues must also be addressed to sustain the growth cited above and expand the scope of the network.
How will economics affect the assimilation and adoption of network technology and the growth of network services? This paper investigates some of these important issues.
In this paper, when we talk about networking technology, we are referring to a wide range of networks. The most notable are the Internet, the telephone network, and the cable TV network. Most of the discussion in this paper applies to networks in general. However, since issues concerning the Internet are the most controversial, we will focus on the Internet, while making occasional comparisons to other existing networks.
The paper is organized as follows: In Part I, we will briefly discuss the history of the Internet and see that pricing is an important issue for network development. Then we will introduce the components of network technology and our economic model for studying the economics of the network. For the purposes of our study, we divide network technology into four entities: Users, Network Access Providers (NPs), Infrastructure, and Services. (NPs and Infrastructure make up the "Network".) Recognizing the immensity of the subject, we will focus our discussion on the Network and its relationships to Users and Services rather than giving shallower coverage of the whole subject. Nevertheless, all the issues involved in the big picture will be identified first, and those concerning the Network will be discussed further in the rest of the paper.
In Part II, we will give some background information on network economics. Due to the nature of the telecommunication and information service industries, it is not efficient to apply the classic economic practice of pricing at marginal cost. We will explain why pricing is necessary and present pricing algorithms in the context of network technology. In Part III, the economic issues of Network Access Providers are investigated. Different pricing schemes for network access will be introduced and compared, and the issues of settlements among NPs will be discussed. In Part IV, we will investigate the economic issues of the Infrastructure. Issues of cost and pricing of the infrastructure will be discussed, as well as the settlements between Infrastructure providers and NPs, and among different Infrastructure providers. We will also address the impact of costs and pricing on the evolution of the future infrastructure. In Part V, we will look into some specific case studies and illustrate how the issues addressed play out in real life. We will conclude with the case study of the network computer and comment on the viability of this upcoming wave. Finally, to illustrate how the issues in this paper relate to the issues studied by other groups, we will discuss these interactions in Part VI.
Before we introduce the components of network technology, we will first briefly look at the history of the Internet. When the Internet was created decades ago, its pricing was a minor or nonexistent issue. Why is pricing becoming a necessary issue to address nowadays? By looking at its history, we can gain an idea of how the technology has been changing.
Brief history of the Internet
In the 1960's, in response to the nuclear threat during the Cold War, the Advanced Research Projects Agency (ARPA) engaged in a project to build a reliable communication network. The network deployed as a result of this research, ARPANet, was based on packet-switching protocols, which could dynamically reroute messages in such a way that messages could be delivered even if parts of the network were destroyed. ARPANet demonstrated the advantages of packet-switching protocols, and it facilitated communication among the research institutes involved in the project. As more universities were connected to the network, ARPANet grew quickly and soon spanned the United States. The TCP/IP protocols, developed in the mid-1970's, eventually replaced the existing protocols, a transition facilitated by their integration into Berkeley UNIX.
In the 1980's the National Science Foundation (NSF) created several supercomputer centers around the country. The NSF also deployed a high-speed network based on Internet protocols, NSFNET, to provide universities with remote access to the supercomputer centers. Since connection to NSFNET was not restricted to universities with Department of Defense (DoD) contracts, the network grew dramatically as all kinds of non-profit entities, as well as universities and research groups, connected to it. A nonprofit Michigan-based consortium, the Michigan Educational Research Information Triad (MERIT), managed NSFNET. Since Internet access was subsidized by the NSF and by the non-profit entities connected to the network, economic issues such as accounting, pricing, and settlements were for the most part ignored.
As NSFNET grew and its potential became obvious, many for-profit entities wanted access to the Internet. Since the NSF did not want to subsidize Internet access for these private groups, it gave control of NSFNET to the nonprofit corporation Advanced Network and Services (ANS). ANS was created from resources provided by MERIT, MCI, and IBM, and was free to sell Internet access to all users, both nonprofit and for-profit. Meanwhile, some for-profit backbone providers such as PSI and UUNET started selling Internet interconnection services. As the Internet became more commercialized, people began studying and experimenting with Internet economics.
In 1995, ANSNet was sold to America Online, and a new commercial Internet replaced the NSFNET-based Internet. The new Internet consists of a series of network backbones interconnected at Network Access Points (NAPs). The NSF is phasing out its subsidies of the backbones but still subsidizes four NAPs: San Francisco (PacBell), Chicago (Ameritech), Washington DC (MFS), and New Jersey (Sprint). The popularization of the Internet and the perception of an imminent convergence of voice, video, and data networks provided impetus for the telecommunications deregulation of 1996. At the same time, it became even more obvious that such a network convergence would require a coherent system of settlements and pricing. With different networks able to provide the same (or similar) services, the old telephone and cable pricing structures may become inadequate, and new structures must be created to replace them.
Network services in a broad sense include not only those products provided by the Internet but any kind of service that is provided by, or cannot be produced without, the presence of a network. For example, in addition to the great number of information goods and electronic commerce activities on the Internet, phone calls and cable TV are also network services. Where there are users (buyers or consumers) to purchase these services, there are sellers (or producers) to provide them. In network terminology, these sellers are called service providers.
Just as in any other market, there has to be a means of getting the product (or, in the case of a service provider who sells hard goods, information about the product) from the producers to the customers. In the context of networking technology, this means is called the network infrastructure. Some well known infrastructures include the telephone network, the cable television network, and of course, the Internet (which is mostly part of the network provided by telephone companies).
To simplify the terminology, we call network services simply Services. These Services range from information goods to phone calls, as suggested above. Notice that in the context of the Internet, we define Services to also include the information service providers, so as to distinguish the network itself from the content flowing on top of it. Users are defined as the individuals who consume Services via the network. Network Access Providers (NPs) are defined as the companies that provide network access to Users and Services so that they can communicate. Finally, we define Infrastructure as the physical network infrastructure and its protocols, which allow information exchange in the network. We will use these definitions throughout this paper.
Now we have identified four main components in the network services market (or, in other words, the four main components of the networking technology). They are: Users, Network Access Providers, Infrastructure, and Services. This breakdown gives us a hierarchy of four levels going from the user end to the information end, where the network, including the Network Access Providers and the Infrastructure, sits between the User and the Services. This paper presents the interactions of these components in the context of economics, emphasizing cost, pricing and settlement issues.
To aid the understanding of the relationships of the four components introduced above and the presentation of our following analysis, we will present our economic model of networking technology here. (Figure 1)
Figure 1: The Four Main Components of Network Technology. The infrastructure and the network access providers make up the "Network" in our model.
The three parts of network technology - Users, Network, and Services - are placed in a circle so that each touches (representing interaction with) the others. The Infrastructure is "embedded" in the Network Access Providers because it has no direct interaction with the Users and the Services. Access to (the bandwidth of) the network is "retailed" by the Network Access Providers. For example, in the context of the telephone industry, the telephone companies are Network Access Providers which utilize the Infrastructure (i.e. the telephone lines) to provide Services (i.e. phone calls) to the Users (i.e. telephone customers).
As can be seen in Figure 1, there are four main kinds of economic interactions in the world of network technology, illustrated by the double arrows in the figure. An example best illustrates these interactions. Say there is a user called Jane. She is sitting in front of her computer at home, browsing the web to find a birthday present for a young friend. After some time she finds a nice toy on ABC company's webpage and purchases it. Which entities were involved in this transaction?
The answer may not be obvious, but for this transaction to have taken place, all four components of network technology played a part. Jane is, of course, the User. She can browse the web from home because she is using the connection service provided by a Network Access Provider - in this case, more specifically, an Internet Service Provider, or ISP. The ISP in turn charges her a price for access to the Internet. The ISP can provide this service to Jane because it rents a part of the Internet Infrastructure in order to provide network access service to Users like Jane. The ISP has to pay the company that provides the infrastructure (most likely a telephone company in this example). The homepage of ABC company is on the web because the company pays another (or possibly the same) ISP for a connection to the Internet, in order to provide this electronic commerce as a Service. Finally, Jane pays for this Service to buy the toy. As we can see, there are economic interactions between User and Network Access Provider, Network Access Provider and Infrastructure, Network Access Provider and Services, and lastly, User and Services.
Besides the economic interactions between different components, there may also be, especially in the case of the Internet, economic interactions within a component. For example, there are settlement issues between different providers of the infrastructure over "pass-through" traffic.
As suggested above, we can divide the economic issues of networking technology into interactions between different components and interactions within a particular component. We will now briefly list the issues in each category. Since our emphasis in this paper is on the Network, we are not going to discuss the interactions between Users and Services in depth; instead, we provide only an overview of the general issues in that category below.
Network Access Providers <-> Users
Network Access Providers <-> Services
Network Access Providers <-> Network Access Providers
Infrastructure <-> Network Access Providers
Infrastructure <-> Infrastructure
User <-> Services
The economic issues between Users and Services seem straightforward, although they can get quite complex. On one hand, Users want to get Services. On the other hand Services want to gather information about the Users, in order to improve and customize their service, as well as to price-differentiate so as to extract as much customer value as possible.
There are three main kinds of Services available for consumption on the network, namely electronic commerce [10], information goods [2] [3] [4], and software applications distributed over the network. Issues in this area mainly concern the pricing of the service, as well as the impact of the service on the way people live. Electronic commerce is argued to be the mechanism that minimizes transaction costs. Information goods, ranging from electronic journals to up-to-date stock market information, raise controversial pricing issues because the initial production cost is large but the incremental reproduction and distribution cost is essentially zero. This is also the case for networked software applications such as Netscape and Java applets. Because distribution cost is negligible, software producers have to devise a way to recover their initial production cost. Another interesting issue for networked software applications is whether they should be priced once at first use, or on a usage basis.
Lastly, the asymmetry of buyer and seller knowledge (the buyer learns information about the seller just from the transaction alone) leads to the issue of user privacy [11]: how much is personal information worth?
In industries that exhibit perfect competition, economic theory dictates that firms will end up pricing at marginal costs. In a perfectly competitive market structure there is a large number of suppliers, none of which is too large relative to the overall market, the outputs of these suppliers are homogeneous [5] and there are no barriers to entry. It is assumed that the industry exhibits diminishing returns to scale and that the fixed costs are relatively small. However, the telecommunications and information services industries require huge fixed costs in the deployment of their required infrastructure and they exhibit increasing returns to scale. Therefore, it is not efficient for them to apply the classic economic practice of pricing at marginal cost (which is close to zero).
Since uniform pricing at marginal cost is not efficient in this industry, suppliers must devise other pricing strategies. One such strategy is to employ differential pricing schemes. Different consumers of a certain product usually place a different value on that product and, therefore, the willingness to pay for the product varies across the consumer population. Firms try to extract as much of this value from the consumers as possible by using differential pricing schemes. The amount extracted is limited by the consumers' willingness to pay for the product. Differential pricing schemes can be divided into two-part tariff schemes and price discrimination schemes.
In a two-part tariff scheme, users are charged an attachment fee to connect to the network, and a usage fee for their incremental use of the network. The entry (attachment) fee should be set to cover the fixed costs of the network infrastructure, plus any consumer surplus derived from the attachment. The usage fee may be metered by time, packets, bandwidth used, etc., and should also include the marginal consumer surplus derived from that usage.
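To make the structure concrete, the two-part tariff can be sketched as a simple billing function. The fee values below are hypothetical, chosen only for illustration:

```python
def two_part_tariff(usage_units, attachment_fee=20.00, usage_fee=0.02):
    """Two-part tariff: a fixed attachment fee plus a metered usage charge.

    attachment_fee is set to cover the network's fixed infrastructure
    costs (plus attachment surplus); usage_fee is the per-unit charge
    (per minute, per packet, per unit of bandwidth, etc.).
    Both values here are hypothetical.
    """
    return attachment_fee + usage_fee * usage_units

# A user who consumes 500 units pays the attachment fee plus 500 units
# of metered usage.
bill = two_part_tariff(500)
```

Under this structure a light user's bill is dominated by the attachment fee, while a heavy user's bill is dominated by the usage component.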
In a price discrimination scheme, consumers are divided into segments and are charged according to the segment to which they belong. There are three types of price discrimination schemes: first-degree (personalized pricing), in which each consumer is charged his or her exact willingness to pay; second-degree (versioning), in which different versions of the product are offered at different prices and consumers self-select; and third-degree (group pricing), in which consumers are charged according to an observable group to which they belong.
With price discrimination schemes, profit-seeking firms will try to extract as much consumer surplus from each segment as possible. Each segment is charged an optimal price based on the estimated willingness to pay of that segment. For example, businesses that rely on telecommunication services are willing to pay (and therefore are charged) higher rates than individual households.
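The segment-based rule described above can be sketched as follows. The segment names and willingness-to-pay estimates are invented for illustration, and charging slightly below estimated willingness to pay (so consumers still choose to buy) is one plausible implementation, not an established formula:

```python
# Hypothetical willingness-to-pay (WTP) estimates per customer segment.
SEGMENT_WTP = {"business": 80.0, "household": 25.0, "student": 12.0}

def segment_price(segment, capture=0.95):
    """Segment-based (third-degree) price discrimination: charge each
    segment a price just below its estimated willingness to pay,
    leaving consumers a small surplus so that they still purchase."""
    return SEGMENT_WTP[segment] * capture
```

A business customer is thus quoted a price several times higher than a student for the same service, reflecting the difference in estimated willingness to pay rather than in cost.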
Lastly, when considering a network pricing structure, it is important to distinguish between four types of charges [6] that the above (and other) pricing schemes can apply: fixed access charges, capacity charges, usage charges, and congestion charges.
Economic welfare
As mentioned before, differential pricing schemes are necessary in order to cover all (or most) markets, including small niche markets [8]. Non-differential pricing schemes, such as the $19.95 flat fee for Internet access, cannot be optimal. There are segments of the population that place a high value on Internet access and are willing to pay more than $19.95 and that surplus is not being extracted by the Network Access Providers. Similarly, there are segments of the population that would buy Internet access, even at a degraded quality, but do not think the service is worth $19.95. In an industry with very low marginal costs (close to zero), users with a high willingness to pay could subsidize the cost of the network infrastructure, allowing users with lower income or with a low willingness to pay to receive the service at close to marginal cost.
In the United States, the telephone industry has an elaborate differential pricing mechanism that includes fixed charges, usage charges and congestion charges. Users in highly populated areas subsidize users in rural areas. Note however that this system does not (yet) work in a perfectly competitive and deregulated market. Therefore, customers in populated areas are not being charged their entire surplus. A similar situation currently exists in the Internet: ISPs are being artificially subsidized by the telephone companies, which are not being allowed to charge them congestion fees.
In Part I, a Network Access Provider (NP) was defined as a company that provides network connections to Users and Services. NPs should not be confused with Infrastructure companies, which provide the physical network structure to non-end users (i.e. NPs). Figure 2 below shows a possible connection scheme between a User, Services (in this case a stock quote server and a news server), an NP, and the Infrastructure.
Figure 2: Typical connection relationship between the four different network entities in our model, as in the case of the Internet.
NPs can be thought of as providing access to end-users. Often, a company that we classify as an NP can also qualify as a Service (as we define it). America Online, for example, provides Internet access as well as on-line services such as chat sessions and bulletin boards. In this section, however, we restrict our pricing discussion to the Internet access aspect of being an NP.
In the model (see Figure 1), four different relationships involving interactions with different network entities can be seen:
Network Access Providers <-> Users
Network Access Providers <-> Services
Network Access Providers <-> Network Access Providers
Network Access Providers <-> Infrastructure
The first three relationships will be discussed in the following subsections. The last relationship will be discussed in Part IV.
This section first briefly describes the costs incurred by an NP. We then list some user pricing schemes for recovering those costs and, in some cases, providing a profit. Lastly, we describe a typical NP's pricing strategy as an example of how to build a strategy from these different pricing schemes. Other examples of pricing strategies will be discussed in the AOL case study.
Although it is true that many infrastructure and other network technology companies (see Part IV) enjoy economies of scale that result in high sunk costs and low marginal costs, NPs suffer from diseconomies of scale when dealing with users. Customer support, accounting, billing, and hardware maintenance all increase disproportionately with the number of users [12]. Furthermore, anything that inconveniences the user will not be tolerated [13]. Pricing, then, must recover the fixed and growing marginal costs without inconveniencing the users. Lastly, a pricing scheme should also provide incentives for both the Network Access Provider and the Users to act in a socially responsible way. For example, the scheme should encourage the NP to invest in more capacity when necessary and still give the user a disincentive against using Internet services in a wasteful manner. We will first discuss the costs that NPs incur.
Hardware and software. An NP must recover the costs of hardware, software, and customer support. The hardware and software costs will vary depending upon the type of access the NP supports (which in turn depends upon the customers' preferences). Customers can choose between dialup and leased line access. Dialup service requires that the NP purchase a terminal server, modem pool, and dialup lines. The software support costs of providing dialup service are negligible. Occasionally, the hardware must be upgraded. These upgrade costs tend to be "lumpy" in that they are incurred in large lumps rather than incrementally over time. NPs providing leased line access must provide a router at either end of the leased line (one at the NP site and one at the customer site), but terminal servers and modems are not necessary [17]. The software required for leased line service is more complicated than that required for dialup service, as configuration in the former case may take considerably more time.
Customer support. Customer support costs can be categorized into three support types that occur over the life of the NP/customer relationship: costs of acquiring a customer, costs for supporting an ongoing customer, and costs of terminating a customer relationship. Acquiring a customer involves not only the marketing costs to attract the customer but may also require, for example, a credit check, on-site consultation and custom configuration. Ongoing customers may require occasional upgrades and ongoing network maintenance. At termination, the NP must settle accounts and reconfigure the hardware.
Based on some of the background discussions in Part II, we will now consider the positive and negative aspects of the pricing models in the context of NPs. In doing so, the complicated issues in pricing will be revealed.
The case for public subsidy. Before considering any one pricing scheme, it is useful to ask, "Is it technically, economically, and socially feasible to charge for Internet service at all?" Some believe that the answer is "no". Some units of pricing, such as the number of packets (units of communication) sent, require more computing resources to do the packet accounting than to send the packet, rendering those pricing schemes infeasible. As far as economic and social feasibility is concerned, there is a very strong argument that the Internet access market cannot succeed and, therefore, that the prices charged will be neither economically nor socially optimal. From the trend of NP insecurity and price flexibility, one can conclude that the Internet access market is currently competitive. There is a strong belief, however, that a market "shakeout" will occur, from which only a few NPs will survive. If this is the case, then those firms will be able to charge prices much higher than marginal cost. Market failure is said to occur at that point - "when the market is incapable of producing an economically efficient and socially optimal allocation of resources" [15].
When a market fails, economic theory says that government intervention is required, especially for a quasi-public good such as the Internet. Government regulation, however, is beyond our scope. From now on, we will continue with the pricing analysis ignoring the possibility of Internet access market failure.
Flat. Some economists support the argument that a flat price for Internet access, regardless of the amount of resource use, is the only feasible pricing scheme [18]. The argument is based on the fact that we are building a general purpose network whose uses will be very diverse, so the resulting dynamic allocation of resources (usually bandwidth) will become increasingly difficult to meter and expensive to track. This argument is most compelling, however, at the Infrastructure level in regard to charging NPs for resources. It is less compelling at the NP level, where the amount of resources used is more easily monitored, as there is a known and finite set of destinations (the customers) that need to be tracked. Lastly, flat rate pricing does not give the user any incentive to avoid causing congestion.
Usage based. Recall from Part II that usage based charges are determined by the quantity of use, which can theoretically be measured in a number of different ways: speed of the connection (i.e. the modem speed), length of the connection to the NP in minutes, number of packets sent, and so on. Pricing based on the number of packets actually sent has the advantage of being fair in the sense that users are charged for exactly what they use. Pricing based on the number of minutes of the connection is unfair, however, because it does not distinguish the length of the connection from the number of packets actually transferred, and there may not be any correlation between the two. It is entirely possible, for example, for one user to spend an hour reading information downloaded at the beginning of a session, while another user downloads a new page of information every five minutes.
Usage based pricing does provide a disincentive for users to be wasteful of network resources, since they must pay for the resources they use. In practice, however, setting rates and measuring the usage is very difficult. It could take more computing power to account for the resources used in sending a packet than to actually send the packet. Therefore, usage pricing based on the number of packets is economically infeasible. When other accounting measures such as connection speed or connection time are used, however, users will complain that these are unfair, because people with the same connection speed or connection time who send and receive different amounts of traffic would be charged the same. Also, there could be a lot of "idle" time in which no network traffic flows but the connection is still maintained. Finally, usage based pricing is very controversial because it endangers the vitality of the Internet. Users would undoubtedly not "surf" the web as freely with a virtual meter ticking in the background. Where usage based pricing has been tried, growth has slowed down [14].
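The fairness objection to time-based metering can be seen in a small numeric sketch (all rates here are hypothetical): two users connected for the same hour pay the same under time-based billing even though they transfer vastly different amounts of traffic.

```python
def time_based_charge(minutes, rate_per_minute=0.05):
    """Bill by connection time only; traffic volume is ignored."""
    return minutes * rate_per_minute

def packet_based_charge(packets, rate_per_packet=0.0001):
    """Bill by the number of packets actually sent and received."""
    return packets * rate_per_packet

# Two users, both connected for 60 minutes:
reader_packets = 1_000        # reads one downloaded page for an hour
downloader_packets = 500_000  # downloads new pages continuously

# Time-based billing charges both users identically, while
# packet-based billing differs by a factor of 500.
```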
So can a compromise be made between the simple, convenient flat-rate pricing and the meter-ticking, unfriendly but economically more "efficient" usage based pricing?
Priority. In a priority scheme (also sometimes called a Quality of Service scheme), the user chooses the quality of service that they want and pays a flat fee for that quality of service [14]. A user could choose between high and low priority connections, for example. Another example of priority pricing is to allow users to actually choose the priority of their packets (both sending and receiving) in the Internet. This latter type of priority pricing is not currently available because the underlying infrastructure does not differentiate between packets of different priorities. However, this type of pricing might provide better quality of service than a faster line: although the faster line could provide better service at the endpoint of the user's connection, it does not provide the end-to-end guarantee that packet priorities would.
The idea behind priority pricing is that the user pays for what they get but does not have to deal with that "ticking meter" feeling. Priority charges also have the advantage that they allow the NPs to charge for "luxury items" and, therefore, attempt to charge a price closer to the user's willingness to pay. However, priority based schemes may not provide enough granularity to allow NPs to charge at the highest level possible for each customer.
Tiered usage. In a tiered usage pricing scheme, the user is charged a certain amount for the first X units of use, then a higher amount for the next Y units of use, and so on. The advantage of tiered pricing is that it might allow whimsical browsing without encouraging excessive use. The disadvantage is that the user would be inconvenienced by having to keep track of their usage. As we stated above, user inconvenience is not acceptable. This could be remedied in a number of ways, however: perhaps by sending a message to the user once they have crossed the threshold of a new tier, or by allowing the user to access their account records thus far.
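A tiered scheme of this kind can be sketched as follows; the tier boundaries and rates are hypothetical:

```python
# Hypothetical tiers: (units in this tier, price per unit).
TIERS = [(100, 0.01), (400, 0.02), (float("inf"), 0.05)]

def tiered_charge(units):
    """Charge the first 100 units at 0.01 each, the next 400 at 0.02,
    and everything beyond that at 0.05, mirroring the X/Y structure
    described in the text."""
    total = 0.0
    remaining = units
    for tier_units, rate in TIERS:
        in_tier = min(remaining, tier_units)
        total += in_tier * rate
        remaining -= in_tier
        if remaining <= 0:
            break
    return total

# Light use stays cheap; heavy use is billed at progressively higher rates.
```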
Congestion. One reason to introduce pricing schemes into the Internet is to make users understand the value of what they are gaining (an ability to communicate and to access information) and to give them an incentive to act in a socially conscious way which reduces the harm to others [16]. For example, everyone is accustomed to higher daytime rates for long-distance telephone service. The rates are higher during the day because phone lines are congested during that time. Higher prices serve to inform the customer of the extra value of calling during periods of congestion. The customer, then, will meter their daytime use according to their willingness-to-pay for that telephone call: if the call is relatively urgent, they will phone during the daytime; if not, they will wait until the evening. In the Internet, we can do something similar by charging according to the state of congestion of the network.
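The telephone-style time-of-day rate described above can be sketched as a simple congestion surcharge. The peak window and multiplier are hypothetical; a real scheme would measure congestion directly rather than assume it from the clock:

```python
def congestion_rate(hour, base_rate=0.02, peak_multiplier=3.0):
    """Per-unit rate that rises during the assumed peak window
    (9:00-17:00 here), signalling the extra social cost of using
    the network while it is congested."""
    if 9 <= hour < 17:
        return base_rate * peak_multiplier
    return base_rate

# An urgent transfer at noon pays the peak rate; a patient user can
# wait until the evening and pay the base rate.
```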
The drawback of a congestion-pricing scheme is that it provides an incentive for the NP to cause congestion by restricting its capacity (which would be analogous to a monopolist choosing to produce a small supply of product). Figure 3 illustrates that the NP will constrain the supply up to the point that the gain is equal to the loss. There are several ways in which congestion can be spuriously introduced: for example, an NP can restrict its link capacity or defer capacity upgrades.
This is certainly undesirable because the network is not used efficiently. Either the full capacity is not employed, or some capacity is wasted just for the sake of "creating congestion". A way to get around this problem is to introduce the two-part tariff mentioned in Part II.
Two-part tariff. A two-part tariff is comprised of a fixed (f) portion and a variable (v) portion. The fixed portion includes charges for network access and capacity (a capacity charge is based on the network's maximum possible bandwidth). It is determined by the fixed costs, the willingness to pay of the customer population, and the size of the population. The variable portion is based on the actual usage of the users and the priority of their service. Because the variable portion extracts the consumer surplus, the two-part tariff maximizes the consumer surplus extracted from customers, and therefore gives the NP a disincentive to induce congestion, which would reduce the number of connections and the network usage.
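As a sketch (all fees and rates here are hypothetical), a two-part bill is simply the fixed portion plus usage weighted by priority:

```python
def two_part_bill(fixed_fee, usage_by_priority, rate_by_priority):
    """Two-part tariff: a fixed portion (access + capacity charge) plus
    a variable portion based on actual usage and service priority."""
    variable = sum(units * rate_by_priority[p]
                   for p, units in usage_by_priority.items())
    return fixed_fee + variable

# e.g. $9.95 fixed, 10 high-priority units at $0.10, 100 low at $0.01
two_part_bill(9.95, {"high": 10, "low": 100},
              {"high": 0.10, "low": 0.01})  # 11.95
```

The fixed fee covers the sunk network costs regardless of load, which is what removes the NP's incentive to throttle capacity.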
One should naturally ask if it is even possible to capture the consumer surplus in a perfectly competitive environment. The answer is "no", but perfect competition may not be present in the future Internet economy due to the "shakeout" mentioned above. In that case, it would be possible to capture consumer surplus.
Smart Market proposal. The Smart Market proposal [7] provides an intelligent way to price the variable portion (v) of the two-part tariff mentioned above based on network congestion. In an ideal world, the price charged for network use would be a continuous function of the congestion. The price charged to the user would be determined by the congestion level at the time the packet was transmitted. However, this would be inconvenient for the user and the NP as the NP would constantly have to monitor congestion and the user would have to constantly monitor the price to determine if the price has surpassed the user's willingness to pay.
The Smart-Market proposal suggests that users specify a bid for each packet sent. That bid should reflect the user's willingness to pay. In times of congestion, packets are prioritized according to their bids. Packets are charged at the bid of the highest priority packet that is dropped, not the bid on each packet. This provides an incentive for the users to bid based on their true willingness to pay.
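The bidding rule can be illustrated with a small sketch. Under the simplifying assumption that the link can carry `capacity` packets in the congested interval, the mechanism admits the highest bids and charges everyone the highest rejected bid:

```python
def smart_market(bids, capacity):
    """Admit the `capacity` highest-bidding packets; each admitted packet
    is charged the bid of the highest-priority packet that was dropped
    (zero if nothing is dropped)."""
    ranked = sorted(bids, reverse=True)
    admitted = ranked[:capacity]
    price = ranked[capacity] if len(ranked) > capacity else 0
    return admitted, price

smart_market([5, 1, 3, 4, 2], capacity=3)  # ([5, 4, 3], 2)
```

Because the price a packet pays is set by other packets' bids, shading one's bid below one's true willingness to pay only risks being dropped; it never lowers the price paid. This is what makes truthful bidding the best strategy.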
Selling advertising and marketing information. Perhaps it is not necessary for the user to pay at all. Rather, the NP could bring in profits just by selling advertising space. In fact, a Berkeley Internet search engine company makes its profits not from the users that use the search engine, but from selling advertising space to big companies (Novell, Visa) and from selling the marketing information about the web found by their "web crawlers" (the programs that find documents to search). NPs are often in the Service industry as well and, as such, might set up charge accounts for their customers. By gathering the personal information (such as tastes) of their customers, they can sell this kind of information to other companies. Although people tend to regard their privacy as sacred, they are surprisingly willing to give up that privacy for a very small amount [12]. Selling advertising space or username lists is an alternative for all NPs, not just for those that provide information services.
In practice, many of the above pricing schemes are combined to form a successful pricing strategy. The Omaha based NP, Mitec [40], has a variety of pricing options that target personal, business and corporate users. They offer Web development solutions and a variety of different access speeds.
For users seeking "personal solutions," for example, Mitec offers them an option of a flat-fee account with unlimited access for $19.95 (flat-fees also help in gaining market share), or a tiered account with a flat-fee of $9.95 for the first 20 hours and a $1 additional charge for each hour thereafter. For the family on the Internet, Mitec offers unlimited access for $24.95 with five separate email accounts. Mitec also offers something we have not yet discussed: increasing discounts for 3 month, 6 month and 1 year commitments. (This kind of discounting strategy will be discussed in the AOL case study later in our paper.)
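Mitec's two personal plans can be compared directly. Using the quoted prices, the tiered account is cheaper for light users, and the two plans meet at 30 hours per month:

```python
def personal_plan_costs(hours):
    """Monthly cost of the two personal plans at the quoted prices."""
    flat = 19.95                               # unlimited access
    tiered = 9.95 + max(0, hours - 20) * 1.00  # $9.95/20 hrs, $1/hr after
    return {"flat": flat, "tiered": tiered}

personal_plan_costs(30)  # both plans cost $19.95 at 30 hours
```

A user who expects to exceed 30 hours a month is better off on the flat-fee account, which is one reason flat fees help gain market share among heavy users.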
For the tight budget, Mitec offers an "Email Only Account" which provides the user with an email account but no web access.
Business solutions are offered similarly to the personal solutions. In addition, Mitec offers a business its own domain name (http://my_company.com) for $39.95 (the company must also pay the $100 domain registration fee). A real bargain for the small Internet company aspiring to look big! This solution comes with varying disk space options (for web page storage), varying data transfer amounts per month and varying numbers of email accounts. These accounts come with T1 service.
As an example of priority pricing, three different line rates are sold separately for those who want their own Internet access. Prices range from $89.95 per month for a 33.6 Kbps line to $200 per month for 128 Kbps ISDN.
In this section we consider three issues regarding the interactions between NPs and Services. First, pricing schemes appropriate for Services are discussed. Then we will look at the shifting of the liability for the cost of communication to those who provide the Services. Lastly, we discuss the emerging "push" technology and its impact on pricing.
Many Services need access to the Internet before being able to market their goods on the Information Superhighway. In this capacity, the Services are much like the Users above in that they need to purchase Internet access. Hence, the pricing schemes for Users listed in Part III-A can also be targeted to Services in their capacity as network users. The "advertising alternative" to pricing mentioned above would not be applicable, however, since the Services are the targets of that cost recovery model rather than its beneficiaries.
More often than not, we as consumers must pay a sales tax on purchased items. Although the sales tax goes to some government entity, we do not write a check to the government but instead give the tax money to the seller, who later transfers the money to the government. The buyers greatly outnumber the sellers and the sellers are less mobile than the consumers, so the sellers are liable for the taxes rather than the consumers. Holding the sellers liable is a more efficient method because it minimizes the net cost of accounting and collection for the taxes.
If you map the above model into the context of the Internet, sales taxes are analogous to communication costs, consumers are analogous to Users, the government is analogous to NPs, and sellers are analogous to the Services. It is unclear which entity, Users or Services, should be liable for the communication charges. For example, if a user pays for some Service's software, who pays for the communication cost of downloading the software to the user (we are assuming in this example that downloading is the method of delivery for the software)?
As in the tax collection case above, it might be more efficient to have the NP collect charges from the Service [16]. This would certainly be the case if the User and the NP did not have an existing relationship. However, the User and the NP do have a pre-existing relationship where the User pays the NP for network access. Therefore, the accounting and collection methods are already in place at the NP/User level. It would seem, then, that there is no benefit from imposing the liability for communication costs onto the Services.
However, from Part III-A, we know that charging for actual usage is difficult from a practical standpoint because of the processing power that would be necessary to measure the usage. Imposing the liability for the cost of communication onto the Service would greatly simplify the accounting procedure for usage-based accounting: the server knows a priori exactly how much bandwidth is necessary to transmit each product - we can call this the shipping cost - and would simply need to add the cost to the customer's bill. Although the Service would have to initially measure the cost before selling the product, this is a one-time calculation. Further, because they know the shipping cost beforehand they could simply include a line for shipping cost in the User's bill for the software product. This imposes no more inconvenience on the User than the standard mail-order purchase common today.
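The bookkeeping is trivial precisely because the shipping cost is known a priori. A sketch, with a hypothetical per-megabyte transport rate:

```python
def bill_with_shipping(list_price, size_mb, rate_per_mb):
    """Add a precomputed 'shipping' (transport) line to the product bill.
    The Service measures size_mb once, when the product is listed."""
    shipping = size_mb * rate_per_mb
    return {"product": list_price,
            "shipping": shipping,
            "total": round(list_price + shipping, 2)}

bill_with_shipping(49.95, 10, 0.05)  # total: 50.45
```

No per-packet metering is required: the transport charge is a fixed attribute of the product, exactly like a mail-order shipping fee.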
Currently, the one-button-download user interface is extraordinarily popular: the User clicks on one button and "pulls" the information from the Service to their local site. This model is referred to as pulling. Pulling does not work very well, however, when the user is interested in time-sensitive data: stock quotes, weather reports, etc. This type of information is best disseminated using a push technology in which the producer of the information pushes the information to the interested users when it changes. Push technology is also ideal for sending out news updates that the user has pre-registered interest in. A stockholder in company X may employ a news service to send any news articles about that company when they are released. More generally, perhaps a user registers interest in any stock-market news article. In this last case, not all of the pushed articles will be read by the user.
Which technology is used to disseminate the information, pushing or pulling, can have an impact on who pays for the transport cost. Pulling seems to imply that the user should be charged as they have specifically asked for that particular information good. The click of the "download" button can be considered the consent to buy. Who pays transport costs for Push technology is not quite as clear-cut. If the user does not read half of what is downloaded, should they pay for that information?
Because NPs tend to agree that providing users with full Internet connectivity is a basic requirement, interconnection settlements between NPs covering the case when two users with different NPs are communicating are not necessary [17]. The rationale is that when an NP1-user communicates with an NP2-user, both NPs get paid by their respective customers, so no settlement is necessary. The present practice is to sign either a multi-lateral agreement, which allows all foreign traffic to be accepted by an NP, or several bi-lateral agreements, which are agreements between two specific NPs. Currently, 70% of the NPs sign multi-lateral agreements; the remaining 30% sign bi-lateral agreements [24]. The rationale above does not apply to the case of transit traffic (more commonly referred to as "pass-through" traffic), however. These types of settlement issues will be covered in Part IV.
Although there does not seem to be a need for settlements among NPs at present, future development of the network may raise new issues in this area. One example is illustrated in the Network Computer case study below.
As mentioned previously in Part III, the Infrastructure is defined as the physical network which essentially provides the "highway" for network traffic (including voice, data, video, etc.). It can be modeled as a "web" made up of links and nodes. A link is an abstraction of copper wires, optic fibers, wireless channels of communication, etc. A node is a point where two or more links connect and may be represented by a network router, a telephone switchboard, a radio relay station, etc. As expected, building, maintaining and upgrading the infrastructure requires enormous investment, and thus the infrastructure is usually controlled by a monopoly or an oligopoly, including government agencies.
In reference to the model presented in Part I (Figure 1), this section will analyze two major interactions which involve infrastructure providers, namely:
In the telephone and Cable TV industry, the roles of Infrastructure and Network Access Provider are played by the same company. For example, AT&T owns its telephone lines, and at the same time it provides telephone network access to the customers. However, in the context of the Internet, these roles are often taken up by different companies. There are large ISPs (such as AOL) who own their own infrastructure (referred to as subnets later in this section) and provide Internet access, but there are also countless small ISPs who act only as Network Access Providers. Their interactions have become increasingly controversial as the Internet continues to grow.
We will first describe the costs incurred for creating the physical infrastructure and the costs that infrastructure providers bear to resolve problems resulting from unexpected, heavy local telephone network usage. In addition, we will discuss the problems with the current pricing strategies and bring out some of the unresolved settlement issues. Finally, we will analyze different proposals for pricing the infrastructure.
Obviously, the major network construction costs are buying and installing the links and nodes. Currently, most long haul infrastructure providers use optical fibers for their transmission links. The costs of constructing the fiber optic links include the cost of the fibers, of trenching, and of installation labor. Since the cost of the fiber is relatively small compared to the total cost of installation, excess fiber is typically installed. Between 40% and 50% of the fiber installed by the typical interexchange carriers is "dark", i.e. the lasers and electronics required for transmission are not in place. Private lines can be provided out of this surplus capacity. The costs for connecting a private line include lighting up the fiber with lasers and electronics (if it is originally "dark") and customer acquisition.
Although the sunk cost of network construction is substantial, once the physical infrastructure is established, the incremental cost of carrying packets is negligible. However, maintenance and upgrade costs have recently become a nightmare. The heavy telephone usage at the local loops by Internet users has created serious problems for the telephone companies. In order to accommodate the ever-increasing network traffic, larger and faster switches are constantly replacing the old ones. This cost has been huge [19], but the telephone companies are not getting any compensation for carrying the extra Internet traffic. We will look into this issue more closely later in this section.
The rationale for pricing the infrastructure is to recover the providers' costs and to control network usage. Although there are dark fibers and upcoming new technologies (such as xDSL and ATM) to increase network capacity, in the short term and on a regional scale, congestion is a big and very real problem. It takes only 100 simultaneous video conferencing sessions to jam MAE-East (a major Internet exchange point). It is not prudent to rely on the belief that capacity can be increased indefinitely in the long run, from both the technological and economic points of view. The key, therefore, is to build enough infrastructure to satisfy a statistical demand, and to use economic methods to manage the actual demand.
Current pricing schemes [17]
In order to recover the large sunk costs of the physical infrastructure, providers must charge high up-front fees. These large installation and even termination fees provide customer lock-in. (Note that the customers of the Infrastructure are Network Access Providers in our model.) Infrastructure providers have also included volume discounts and term commitments in their pricing structures, in an effort to provide customers with cost incentives and increase their loyalty.
Standard interLATA private line charges consist of a one-time access rate and a monthly charge based on the airline mileage between the two locations to be connected. There is not, however, any usage-based component in this pricing model.
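With hypothetical rates, this tariff structure amounts to:

```python
def private_line_bill(months, airline_miles, access_fee, monthly_rate_per_mile):
    """Standard interLATA private-line bill: a one-time access fee plus
    a monthly charge proportional to airline mileage. All rates here
    are hypothetical; note the absence of any usage term."""
    return access_fee + months * airline_miles * monthly_rate_per_mile

private_line_bill(12, 100, 500.0, 2.0)  # 500 + 12*100*2 = 2900.0
```

Because the bill depends only on distance and time, a lightly loaded line and a saturated one cost the customer exactly the same.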
For its Accunet 1.5 T1 lines, AT&T offers a 57% discount if monthly bills exceed $1 million on a five-year contract. This pricing methodology seeks to encourage large firms to join, which can provide cost savings for infrastructure providers who prefer to sell their services to one customer rather than to 1,000 customers with monthly bills of $1,000 each. High fixed costs and long-term contracts also encourage ISPs to be loyal, and ensure guaranteed revenue for infrastructure providers.
Other pricing structures include a usage-based tariff in the form of monthly tariff rates per circuit. ISPs which purchase primary rate ISDN lines, business dial tone lines, and CENTREX and CustoFLEX facilities to access the local telephone network are charged a monthly business fee plus a usage charge for outgoing calls [19]. However, there are loopholes in the infrastructure pricing system, and this has resulted in losses for both infrastructure providers and related parties [19].
First of all, incoming calls from the users to the ISPs are not charged. When the users dial in to their local ISPs for network access, the ISPs are not subject to the business usage telephone rate. (See Figure 4) For the users, there is virtually nothing preventing them from clogging up the network. (The "congestion" of concern here is mainly at the switches of the central offices of the telephone companies.) As more users heavily congest the network after business hours, new facilities must be created to maintain the quality of telephone service. Monthly recurring fees charged by local exchange carriers to their customers are not sufficient to recover these added costs; therefore we have to search for solutions so that infrastructure providers will be properly incentivized to further invest in the development and upgrade of the network.
As we mentioned above, although the telephone companies have to carry the extra Internet traffic between the users and the ISPs when the users dial in to access the network, they are not getting any compensation for this. The Enhanced Service Provider (ESP) exemption of the FCC allows ISPs to obtain their access services from local service tariffs. For about $17 per month, an ISP can utilize lines from the local public switched network that can be literally filled to capacity [20]. This practice not only increases congestion at local telephone networks, but also increases the costs borne by the infrastructure providers, while at the same time keeping the infrastructure providers from making further investments in the infrastructure (such as building a broadband access network to the home). When the users enjoy connecting to the network without extra charge (at a relatively low telephone flat fee), it is unlikely that they will choose to pay more for another means of access, even if it is faster and better. In fact, it is not uncommon for some users to acquire a second telephone line just for Internet access. As a consequence, the infrastructure providers do not see an incentive to build a broadband access network to the home, and the future advancement of the network is hindered.
An example may better illustrate the problem of dial-in access. According to a comprehensive study by Bell Atlantic [19], the added costs of patching the problems created by heavy local telephone usage by Internet users actually produce negative net revenues for Bell Atlantic. Sometimes ISPs were using all of the available time slots, which in turn blocked switching access for residential and business customers. Over $2 million of switching equipment, including labor costs, was spent to remedy this problem alone. Some ISPs were also found to operate very close to the maximum line-usage rate for long periods of time. This required the installation of new lines, equipment transfers, and over 300 interoffice trunks. Moreover, heavy traffic loads dramatically shifted from 3:30-4:30pm to 8-9pm. This also created additional costs for rerouting trunks and reconfiguring the network lines of a central office. Bell Atlantic estimated the revenue from these sites at $8 million. However, the overall costs incurred by Bell Atlantic are estimated to be $30 million. Bell Atlantic will therefore suffer a net loss of $22 million. In a five-year period, assuming a 40% annual growth rate (a very conservative estimate), extra costs of $120 million could be generated that will not be covered by Internet users.
In order to ease the problem, settlements on top of the current flat-rate pricing between NPs and infrastructure providers have to be introduced and implemented properly. While the FCC should take the first step by canceling the ESP exemption, NPs and infrastructure providers should find an accounting method that is mutually acceptable to both parties. This could be a lump sum each month, based either on estimated traffic or on a sample of usage. Another simple method is to count the duration of the connections (of the end users), which will probably incur the least overhead for the network. Settlements based on the number and types of packets sent are arguably fairer, but this is not supported by the current technology. Even if the technology allowed this to be done, the overhead generated might be too high to justify. In any case, we see an urgent need to set up a proper settlement model.
In the long run, we see the possibility of implementing new pricing schemes to alleviate the growing congestion, although at present it is clear that the overhead cost is too high. The new pricing schemes, if implemented successfully, will be an effective alternative to doing settlements on top of flat-rate pricing.
Usage-based. Considerations for the implementation of usage-based pricing at the NP level also apply to the infrastructure. In addition, usage-based pricing for the infrastructure can provide extra revenue for the development of more efficient, higher-capacity networks. Provided that an environment exists which makes the adoption of usage-based pricing attainable, charging based on the volume of traffic is a relatively simple and cost-effective scheme.
Costs for providing this service include accounting hardware, software, and a business unit to bill the users. The New Zealand Internet experience provides a good example of a usage-based pricing system that was successful. Moreover, in New Zealand, there are virtually no congestion problems and neither do they foresee any problems in the future [32].
However, one of the major driving forces that encouraged the implementation of usage-based pricing in New Zealand was that their customers wanted that service. In an environment that is hooked on flat-rates, such as in the United States, attractive features of usage-based pricing must exist before the customers (NPs) will accept the switch.
Priority. A more sophisticated pricing scheme suggests charging customers based not only on how many packets are sent out, but also on price differentiation of packets. This is suggestive of the notion of quality of service.
Packets must be divided into different classes. The classification should have enough classes to let the economics work, i.e., A) to alleviate congestion through pricing, and B) to reflect, to a certain extent, the network resources required by these packets.
One must be cautious, however, in implementing an application- or content-aware network which charges based on the types of information transmitted. Unforeseen liability, gateway, and clutter effects [21] may arise to make such a pricing system cost-ineffective, and it could therefore gain less acceptance.
Current interconnection agreements
As mentioned above, the Infrastructure is made up of links and nodes. The network gains its value when different parts of the Infrastructure (owned by different Infrastructure providers, hereafter referred to as subnets) are interconnected to facilitate information exchange. First we will talk about how they are connected and what the connection agreements are. Currently there are four major Network Access Points (NAPs) sponsored by the NSF [23]. The NAPs are "large" exchange points (nodes) for Internet traffic. Subnets connect their networks to the NAPs for the purpose of exchanging traffic with others. There are also exchange points initially dedicated to commercial service. They are owned by the Commercial Internet Exchange (CIX), which was formed in 1991.
The current interconnection agreements are quite straightforward. For the case of NAPs, subnets pay a flat fee for connection according to the line speed and then sign either a multi-lateral peering agreement (MLPA) or bilateral peering agreements (BLPA) for interconnection. For the case of CIX, members pay an annual membership fee for connection, and by joining the membership they agree to exchange traffic without regard to type (commercial or R&E) [17]. In both cases, no extra settlement is done.
Settlements on pass-through traffic
What remains unclear is the issue of "pass-through" traffic between infrastructure providers. Pass-through traffic originates at subnet A, is intended for subnet B, and yet somehow has to go through subnet C. (Such a scenario happens when, for example, subnet C connects to NAP 1 and NAP 2, while subnets A and B connect only to NAP 1 and NAP 2 respectively.) The traffic brings certain benefits to both A and B but not to C. In the current Internet, C either passes on the traffic with goodwill (if it signed the MLPA or the BLPAs with A and B), or rejects it (if it did not sign the MLPA or the BLPAs with A or B), thus affecting the whole network negatively. If C passes on the traffic, significant added costs to handle the extra traffic may be incurred, without any reimbursement. On the other hand, if it refuses to pass the traffic, the packets are routed in a less efficient way, because a subnet that does allow them to pass through may already be congested. So what, if anything, should be done in this regard?
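The NAP scenario described in parentheses can be modeled as a small attachment graph. The sketch below (subnet and NAP names are illustrative) identifies which subnets are in a position to carry A-to-B pass-through traffic:

```python
def transit_candidates(src, dst, attachments):
    """Return the subnets that could carry src->dst pass-through traffic.

    attachments maps each subnet to the set of exchange points (NAPs)
    it connects to. If src and dst share a NAP, no transit is needed."""
    if attachments[src] & attachments[dst]:
        return set()
    return {s for s, naps in attachments.items()
            if s not in (src, dst)
            and naps & attachments[src]
            and naps & attachments[dst]}

attachments = {"A": {"NAP1"}, "B": {"NAP2"}, "C": {"NAP1", "NAP2"}}
transit_candidates("A", "B", attachments)  # {'C'}
```

When the candidate set is empty and no NAP is shared, the traffic simply cannot be delivered, which is the negative network-wide effect of C refusing to carry it.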
First of all, we should note that the adverse effect of having no settlement on pass-through traffic is not as serious as that of dial-in access. Currently only 30% of the subnets do not sign an MLPA [24], so we can speculate that not many subnets really regard pass-through traffic as a big issue. After all, it is not hard to see that the "net" pass-through traffic for each subnet will be approximately zero, provided the subnets are of approximately the same size. The complaints at this point stem from the large subnets, which carry and "distribute" traffic for small subnets, because the chances that the small subnets carry traffic for the big ones are slim. However, it is foreseeable that in an industry equilibrium, communication infrastructure will be controlled by an oligopoly [22] within each nation's boundary. Eventually only a few big players will remain, and the settlement problem will be greatly simplified.
End users enjoy flat rates, and ISPs pay virtually nothing toward solving the problems that heavy Internet congestion creates on the local telephone networks. As a result of this abusive usage and the inability to effectively recover sunk and ongoing costs, infrastructure providers have been discouraged from investing in a better infrastructure to provide broadband access.
One of the solutions for changing the current practice of ISPs is to implement a better pricing system [19]. Such a system, based on usage or quality of service, should create positive revenues for infrastructure providers and decrease congestion. This, in turn, will not only provide incentives for infrastructure providers to increase line speed, but should also encourage them to invest in more cost-effective and efficient infrastructure which does not rely heavily on local telephone lines.
We foresee two major driving forces for creating a flat-rate plus usage-based pricing system: (1) the formation of an oligopoly, and (2) users beginning to demand this accounting system. As end users begin to sense the real impact of congestion, requests for some form of prioritization will arise. This will provide a basis for implementing pricing based on quality of service. Moreover, an oligopoly will provide a less competitive market, which will allow infrastructure providers to charge a non-zero usage fee. Only then can we begin to see real incentives for infrastructure providers to expand and develop new infrastructure.
The only entity that can immediately encourage the development of new infrastructure is the government. Currently, the NSF is providing subsidies for the creation of a very high speed backbone (vBNS) which will include prioritization for 1st-class and 3rd-class services [25]. If successful, this project will provide a good footing for the realization of a more complete usage-based pricing system.
In this part we will look closely at some specific case studies. They are chosen so that the concepts and issues we raised in earlier parts are illustrated and addressed in real-life situations. The first is the AOL pricing history, in which we focus on the aspect of NPs pricing users. The second is the New Zealand and Chilean Internet experience, where we address the issues of pricing at both the NP and infrastructure levels. The third and fourth case studies deal with Internet Telephony and the Network Computer respectively. Since they both touch a wide range of issues, we will discuss all the costs, pricing and settlements involved.
America Online (AOL) is a proprietary network that provides online services to consumers, including electronic mail, conferencing, news, sports, weather, stock quotes, software, computing support, online classes, Internet access and a broad array of informative content. It develops and markets interactive services for businesses, including the design, development and operation of wide area networks. In the context of our model, AOL is at the same time a service, a network access provider and a part of the infrastructure. AOL possesses its own proprietary infrastructure, called AOLnet, which carries most of AOL's network traffic.
Since its creation, AOL emphasized the service part of its business. It developed its own content and network access technology, and user connectivity was limited to the AOL network - there was no interconnection with other networks. As the Internet became more popular, AOL users began to demand Internet connectivity. AOL was forced to interconnect with the Internet gradually, first by providing Internet email, and later World Wide Web browsing. But this interconnection with the Internet threatened AOL's service technology and its role as a service provider. AOL was forced to shift its focus from a service provider to a network access provider. As competition from ISPs increased, AOL became a major ISP itself. The Internet's network externalities and path effects made AOL's network access and content technologies obsolete, as users and content providers favored TCP/IP, HTML and other established Internet technologies over AOL's proprietary ones.
AOL is an interesting example of how network externalities forced an online service to interconnect, to adopt new technologies, to change its business focus and to repeatedly modify its pricing structure. By looking at AOL pricing plans through its history we can see how online services have reacted to the forceful adoption of the Internet as a universal standard. We can also see how competitive forces have driven most of these services to adopt a flat rate pricing scheme and to look for alternative sources of revenue.
Until December 1994
From its inception, AOL used a two-part tariff scheme, with a monthly access charge of $9.95 and a usage charge of zero for up to five hours, and $3.50 per hour thereafter. As we have seen in the economic background section of this paper, a two-part tariff like this is desirable because it is simple and because it extracts much of the consumer surplus. AOL was a self-contained network, and users had a high willingness to pay for the unique services it offered.
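The original tariff can be expressed as a simple billing function. The following is a minimal sketch using the figures cited above; the function and constant names are ours:

```python
def monthly_bill(hours):
    """AOL's original two-part tariff (pre-December 1994): a $9.95
    access charge covering the first five hours of use, plus $3.50
    for each additional hour."""
    ACCESS_CHARGE = 9.95
    INCLUDED_HOURS = 5
    HOURLY_RATE = 3.50
    return ACCESS_CHARGE + HOURLY_RATE * max(0, hours - INCLUDED_HOURS)

# A five-hour user pays only the access charge; ten hours cost
# 9.95 + 3.50 * 5 = 27.45 dollars.
```

The flat access charge recovers fixed costs from every subscriber, while the metered component extracts additional surplus from heavy users.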
At that time AOL had no interconnection with the Internet, which was still unknown to most users, nor with other online services. This lack of interconnection and limited competition allowed the other online services, such as CompuServe and Prodigy, to use similar two-part tariff schemes.
AOL had its own proprietary network access and content technologies. Similarly, other online services had developed their own proprietary technologies. Because the content technologies of each online service were different, independent content providers usually were forced to provide their content through only one of these online services, thus limiting their audience to the users of that service.
January 1995 to July 1996
The $9.95 access charge was retained, but the usage charge after five hours dropped to $2.95. At that time, the Internet was becoming more popular and the number of ISPs was increasing rapidly. However, the $19.95 flat rate for Internet access had not become universal yet. The increased competition from ISPs, the popularization of the Internet, and the imminent introduction of the Microsoft Network gave users many more options. These added options naturally brought down customers' willingness to pay, and forced AOL to drop its hourly rate and to interconnect with the Internet. The other online services were subject to the same pressures and also interconnected with the Internet, and therefore with AOL.
Network externalities became more important during this period. The popularization of the Internet and the proprietary online services meant that more people were connected to a network. Since people were subscribed to different networks, they demanded interconnection to the Internet, and this forced online services to provide Internet e-mail, and later on WWW browsing.
Companies realized that they could reach a large number of people in an economic manner by developing WWW sites. Using the Internet technology was simpler and more economical than contracting and using the proprietary technology of an online service like AOL. As more WWW sites came into being, the network externalities became more powerful, and the number of WWW sites exploded. Both network externalities and economics were making AOL proprietary technologies obsolete.
July 1996 to December 1996
In order to retain customers while still extracting as much consumer surplus as possible, AOL introduced second-degree price discrimination. The existing plan was retained as the "standard" plan, with a monthly access charge of $9.95 and a usage charge of $2.95 per hour after five hours of use. Additionally, a new "Value" plan was introduced, with a monthly access charge of $19.95 and a usage charge of zero for up to 20 hours, and $2.95 per hour thereafter.
With the two pricing plans, AOL tried to target two distinct groups of customers:
In October 1996, AOL introduced an Internet service, Global Network Navigator (GNN) targeting users desiring a full-featured Internet-based service. It charged a monthly access fee of $14.95, with a usage fee of zero for up to 20 hours of use, and $1.95 per hour thereafter.
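A quick computation with the figures above shows where the "standard" and "Value" plans break even, and thus how the two plans sort customers into the two target groups. This sketch uses function names of our own choosing:

```python
def standard_plan(hours):
    # "Standard" plan: $9.95/month, first 5 hours included,
    # $2.95/hour thereafter.
    return 9.95 + 2.95 * max(0, hours - 5)

def value_plan(hours):
    # "Value" plan: $19.95/month, first 20 hours included,
    # $2.95/hour thereafter.
    return 19.95 + 2.95 * max(0, hours - 20)

# The plans break even at 5 + (19.95 - 9.95) / 2.95, roughly 8.4
# hours: light users self-select into the standard plan, heavy users
# into the Value plan, and AOL extracts surplus from both groups.
for hours in (4, 8, 9, 25):
    cheaper = "standard" if standard_plan(hours) <= value_plan(hours) else "Value"
    print(f"{hours:2d} h: standard ${standard_plan(hours):.2f}, "
          f"Value ${value_plan(hours):.2f} -> {cheaper}")
```

This self-selection is the essence of second-degree price discrimination: the seller posts a menu of tariffs and lets customers reveal their type by the plan they choose.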
Since December 1996
AOL made further refinements in its second-degree price discrimination structure. It included the following options:
The adoption of the $19.95 flat fee is important because it signaled the absorption of AOL into the Internet. AOL had been transformed from a service company, whose main product was its content and in which the network was just a necessary means to access the service, into a network company, whose main product was network access. The multitude of pricing schemes may indicate desperation and loss of focus: AOL was apparently trying to match every pricing scheme offered by competing networks (ISPs, MSN), and in the process disregarded what had been one of its main core competencies: its content.
A flat rate pricing scheme does not extract as much consumer surplus as a multiple-part tariff scheme does. In fact, a flat rate pricing scheme may barely cover the services' huge fixed costs. Therefore AOL's new emphasis is on expanding its customer base and on developing alternative sources of income. Given the knowledge that an ISP like AOL has about its customers (e.g. address, online navigation habits), advertising and sales are obvious choices for alternative sources of income.
However, the aggressive acquisition methods that AOL has used have had major economic consequences - acquisition costs range from $50 to $300 per new user (depending on the source), and churn rates are very high. Acquisition costs are deferred over several months, so the actual profitability of the company may not be what is indicated by its financial statements [27].
The flat rate pricing scheme, together with the aggressive acquisition campaign, attracted a huge number of customers, who remained connected for extended periods of time. As a result, AOL's infrastructure became congested - users had a very hard time accessing the system, and when they were successful, the system was painfully slow. AOL miscalculated the impact of the introduction of a flat rate, and as a result it alienated thousands of customers and faced many lawsuits. Since one of the main features that differentiated AOL from other ISPs was the ease of installation and connection, this lack of sufficient infrastructure put AOL in a very dangerous position. AOL reacted by investing millions of dollars in additional infrastructure.
America Online (and other online services) initially positioned itself as a service provider, and limited access to its services to users of its proprietary network. It did not license its content technologies, so they remained proprietary and incompatible with those of the competition. When an alternative technology (the WWW) emerged in the public domain, people had a big economic incentive to use the open technology. As often happens, by the time the company took notice of the new technology, a critical mass of people had already adopted it. So AOL had to abandon its proprietary technology in favor of the open one. This reminds us of the Beta vs. VHS standards case.
A flat rate scheme encourages network congestion, because users are not conscious of the resources that they are consuming or the cost of those resources. As a result, the quality of the service provided by the network is degraded. Investing more in infrastructure may alleviate the problem somewhat, but only temporarily. Furthermore, companies may eventually stop making infrastructure investments that the flat rate will not be able to recover.
Multiple-part tariff schemes such as the access+usage scheme used originally by AOL and other online services are easy to implement under monopolistic conditions. However, under intense competition, services seem to gravitate toward flat-rate schemes. Part of this phenomenon may be due to the characteristics of the TCP/IP protocols, which were designed when the Internet was a subsidized, not-for-profit network. New protocols that allow the implementation of different types of services, such as those based on quality or congestion, may allow services to implement differential pricing strategies. Meanwhile, services may be forced to subsidize their flat-rate pricing plans through other sources of revenue, such as selling marketing information or advertising.
The development of the New Zealand network (NZGate) began in 1990 when six New Zealand universities and NASA established a 9600 bps analog cable link from New Zealand to Hawaii. In April 1991, the network expanded to link all of the seven New Zealand universities to form the Kawaihiko network. Later, the Tuia network was established. It linked Kawaihiko to two pre-existing government managed networks - the Department of Scientific and Industrial Research (DSIR) and Ministry of Agriculture and Fisheries (MAF) - on an informal basis.
In July 1992, the Tuia Society was created, which consisted of three major management groups, i.e. Kawaihiko representing the universities; Industrial Research Limited (IRL) which was the old DSIR; and AgResearch which was the old MAF. Two smaller groups, the National Library and Ministry of Research, Science and Technology (MoRST), also joined the Tuia Society. At that time, a Frame Relay backbone was also set up to provide connectivity between the groups. The Frame Relay backbone was provided by a private organization, Netway Communications, which was a subsidiary of Telecom New Zealand. Figure 5 and Figure 6 summarize the interconnections and the configuration of the management groups and sites within the Tuia Society and Kawaihiko up to 1992, respectively.
In 1991, a large government-funded project was proposed in Chile to create a national TCP/IP backbone that would link all national universities and provide a single international link to the Internet. The project was entitled REUNA. Government support, however, would cover only costs for the initial set-up of the Internet. Therefore, continued operation and development costs would have to be shared among the member institutions. Unfortunately, as a result of disagreements between members regarding the distribution of costs and the control of the network, a few universities left REUNA to create their own national network, named Unired. Both organizations quickly created their independent national networks and by 1992, two international links were established separately linking REUNA and Unired to the Internet (Figure 7).
It is important to note that communication between members on different infrastructures within Chile (i.e. REUNA and Unired) was difficult. The traffic had to travel through the international links, since there was virtually no connection between REUNA and Unired.
New Zealand
The general principles followed by the New Zealand institutions for the establishment, maintenance, and development of their network were: (1) initially share the traffic costs and, if possible, have each site pay its own access costs, and (2) once a proper accounting system was established, "pay for what you use" (both access and traffic costs).
For the initial establishment of New Zealand's connection to the U.S. in 1990, NASA provided the majority of the support for the costs of the U.S. end of the link, but no subsidy was provided by the New Zealand government for the New Zealand end of the link. As a result, all the costs had to be recovered by charging the users. An agreement was made between the six universities that each site would pay for 1/6 of the start-up and ongoing costs to get the project established. A similar pricing scheme was used to establish the Kawaihiko network in 1991, where costs were divided in fixed proportions with Lincoln University paying for 1/13 and each of the other six sites paying 2/13 of the costs. (There are seven universities in the Kawaihiko network.)
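The fixed-proportion split can be checked with exact rational arithmetic. In this sketch only Lincoln's share comes from the text; the other six site labels are placeholders, since the text does not enumerate them:

```python
from fractions import Fraction

# Agreed Kawaihiko shares: Lincoln pays 1/13, and each of the other
# six sites pays 2/13 of the costs.
shares = {"Lincoln": Fraction(1, 13)}
for site in ["SiteA", "SiteB", "SiteC", "SiteD", "SiteE", "SiteF"]:
    shares[site] = Fraction(2, 13)

# Exact arithmetic confirms the shares cover the full cost:
# 1/13 + 6 * (2/13) = 13/13 = 1.
assert sum(shares.values()) == 1
```

Using `Fraction` rather than floats avoids rounding artifacts when verifying that a set of cost shares sums to exactly one.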
In April 1992, when the entire Tuia network went under re-engineering, sites within the Kawaihiko were provided with the opportunity to pay for their own access costs. Netway Communications (an infrastructure provider), which provided the Frame Relay, charged a monthly fee for both the access and traffic costs. Sites within the Kawaihiko management groups could select their own access rates (i.e. speed) at different prices. Since some sites had more costly access fees than others, they agreed that each site would pay its own access charges. Moreover, access costs for sites providing common access for other sites (see Figure 6) were divided using a set of percentages agreed locally at each site. Traffic costs were still shared among participants as they were initially, since an accounting system was not yet implemented to monitor traffic volumes between sites.
The past success of usage-based pricing for international Internet traffic helped to encourage the sites to initially share the start-up costs. They knew that once an accounting system was established, users eventually would only have to "pay for what they used".
Usage-based pricing was first implemented for international traffic, just after the NZGate connection was made in 1990. They adopted a volume-charge pricing scheme, with the following characteristics:
The notion of "committed traffic volume" provided users with predictability as to how much they would be charged per month. The pricing method was as follows: Each site made an initial choice of its committed volume, and thus its monthly charge. If a site's traffic fell into a different "charging step" for more than one month, that site's committed volume was updated to reflect the actual traffic. However, for that unusual month, the site was still responsible for paying its previous committed volume, whether its actual usage had changed or not. This gave a site at least a month's warning of a change in its monthly fees. Committed volumes were updated automatically by the NZGate Management, which simplified the administrative work. Because of the success of this volume-based pricing, sites within the Kawaihiko group, in particular, were willing to divide the costs for the initial establishment of the network with a view that a fair pricing scheme would later be implemented.
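The committed-volume update rule can be sketched in code. The charging-step representation and the function signature are our assumptions for illustration, not NZGate's actual implementation:

```python
def update_committed_volume(committed, recent_traffic, steps):
    """Sketch of a committed-volume rule in the spirit of NZGate's.

    `steps` is a sorted list of charging-step thresholds and
    `recent_traffic` a list of monthly traffic volumes, newest last.
    The committed volume is revised only after actual traffic has
    fallen in a different charging step for more than one consecutive
    month, so a site gets at least a month's warning of a fee change.
    """
    def step_of(volume):
        # Index of the charging step that a traffic volume falls into.
        return sum(volume > threshold for threshold in steps)

    last_two = recent_traffic[-2:]
    if len(last_two) == 2 and all(step_of(t) != step_of(committed)
                                  for t in last_two):
        return last_two[-1]  # New committed volume for next month.
    return committed  # One odd month alone does not change the fee.
```

The two-month window is what makes the charge predictable: a single unusual month is billed at the old committed volume, and only a sustained change moves a site to a new charging step.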
In summary, the key factors that brought about the success of usage-based pricing in New Zealand were:
The common pricing philosophy and mutual trust between and within the management groups were essential for both the initial establishment and eventual adoption of the usage-based system. The availability of a cost-effective accounting system, as well as a simple and "predictable" pricing system, further encouraged the implementation of a cost-effective "pay-what-you-use" system. Moreover, the existence of a single, dominant infrastructure provider significantly simplified and reduced the accounting costs that would otherwise most likely have made usage-based pricing cost-ineffective.
After the establishment of both the REUNA and Unired networks in 1992, both organizations faced the problem of finding a proper pricing scheme to cover both maintenance and development costs. It was quite difficult for the groups to come up with a solution; in fact, this difficulty led REUNA to select a very unreasonable one. The heads of the member institutions of REUNA decided that all network costs were to be split in proportion to the budgets of the institutions, with the exception that international traffic would be charged at a per-megabyte rate. This of course brought about serious disapproval, and eventually forced REUNA to implement a flat rate with unlimited access for national traffic. However, REUNA still kept a usage-based pricing scheme for international traffic. Unired, on the other hand, implemented a flat rate pricing scheme for both national and international usage for its academic and non-profit customers. To recover some costs, commercial customers were charged heavily for international traffic, but were still offered the option of flat fees for national traffic.
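REUNA's initial rule, splitting national costs in proportion to institutional budgets, is easy to sketch. The institution names and figures below are hypothetical:

```python
def budget_proportional_shares(budgets, total_cost):
    """Split a network's national costs among members in proportion
    to their institutional budgets, as REUNA initially decided."""
    total_budget = sum(budgets.values())
    return {member: total_cost * budget / total_budget
            for member, budget in budgets.items()}

# A member holding half the combined budget pays half the cost,
# regardless of how much traffic it actually generates - which is
# exactly why the scheme met serious disapproval.
shares = budget_proportional_shares(
    {"UnivA": 50_000_000, "UnivB": 30_000_000, "UnivC": 20_000_000},
    120_000)
```

The sketch makes the objection concrete: the charge depends on a member's size, not its usage, so large, light-traffic institutions subsidize small, heavy-traffic ones.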
In contrast to the New Zealand experience, the network in Chile found it difficult to implement usage-based pricing. The political competition and unreasonable pricing solutions in the past left both REUNA and Unired with no reasonable alternative but to charge flat fees with unlimited access. Any other pricing besides flat-rate pricing was not encouraged, in fear that an "unfair" and expensive usage-based pricing would be implemented. It has been argued that it would be difficult for REUNA even to implement a volume-based charging system for international traffic, especially since their competitor, Unired, had implemented a flat-rate system for its non-profit customers [31].
If, however, by reducing costs to the users, REUNA or Unired could gain complete market share, then it could implement a usage-based pricing scheme more easily. Alternatively, within a competitive market, a possible situation that would encourage usage-based pricing would arise if congestion were so heavy that people desired improved quality of service for real-time applications, for example, video conferencing.
Pros and Cons of pricing methods in New Zealand and Chile
The benefits that New Zealand customers experienced were virtually no congestion problems and having to pay only for their own traffic and access fees, not others'. To date, New Zealand neither has nor foresees any congestion problem [32], primarily because users are conscious of their use of the Internet. Moreover, since the network has the ability to monitor traffic, areas with heavy traffic can be readily identified, and problems, ideally, can be quickly resolved. In addition, usage-based pricing may be more attractive to customers who do not use the Internet often, especially if costs are less than flat rates. Hence, usage-based pricing could encourage universal access. However, since Netway held a virtual monopoly on the New Zealand Internet infrastructure, costs may not have been very cheap at all, and universal access may not be encouraged.
Another issue regarding Netway's virtual monopoly is that New Zealand may suffer from slower infrastructure development. A comprehensive 1995 study by the Organisation for Economic Co-operation and Development (OECD) [33] concluded that countries with less competition generally had higher consumer fees and less infrastructure and system development. However, slow development may or may not be a problem for New Zealand, since major institutions such as Kawaihiko have aggressively requested improved infrastructure. But it is generally true that upgrading infrastructure and quality of service under a monopoly is more expensive than it would be in a competitive market. In this regard, development would be discouraged if costs were too high.
Also fortunate for Netway is that, in addition to New Zealand's philosophy of "pay for what you use", there is a consensus that members should "pay for what you want". As a result, Netway does not have to carry the full burden of investing in new infrastructure. If an organization wants a special service, it must commit to a monthly access fee. For example, Waikato and Victoria desired a dedicated 128kbps connection between them, and both groups were charged monthly for their access to that line. Hence, the infrastructure provider and the customers shared the development costs.
In contrast, a market that is "too competitive", as in the Chilean experience, can be counterproductive. The extreme positions resulting from past disagreements left REUNA and Unired with unconnected local infrastructures, so communication between sites on different infrastructures had to travel through the US - a complete waste of resources. Due to the unlimited access option, the infrastructure also suffered from heavy congestion. Indeed, competition forced the flat rate so low that it could barely recover costs. This resulted in less development and a decrease in quality of service, since funding for both the infrastructure and existing services was barely supported. Eventually, to recover costs, flat fees have to be increased [31]; but this discourages universal access, since the service may become too costly for people who do not use it often.
Table 1 below provides a summary of the pros and cons that must be considered when implementing a usage-based pricing system in relation to the New Zealand and Chilean Networks.
Table 1: Pros and Cons of Internet Systems in New Zealand and Chile
In conclusion, implementing a usage-based pricing methodology in a monopolistic and cooperative environment that desires usage-based pricing is not especially difficult. In a competitive environment where disjointed services and infrastructures exist, a usage-based pricing system could be implemented by:
In order to realize a "universal" usage-based pricing system (or, more generally, a non-flat-rate pricing scheme) in a competitive market such as the United States, one of the four situations listed above has to be present. However, the adoption of usage-based pricing most likely will not be immediate; rather, it will be accepted on a small and gradual scale. As the number of applications that require high bandwidth increases, users will demand usage-based pricing schemes (again, more generally, non-flat-rate pricing schemes) to avoid congestion problems. Hence, it is possible to see a gradual trend of acceptance of new pricing schemes in the United States, but it is unlikely that all people will require and demand it, since users still enjoy the freedom of unlimited traffic access. Whether this gradual trend can survive under the fierce competition of flat rate pricing remains an interesting question.
Definition
To date, we can observe two different definitions of Internet Telephony - one more narrow than the other. A narrow definition is that it is the technology that transmits voice over the Internet. Many applications that exist today fulfill this definition of Internet telephony: RealRadio and Eudora, for example. The broader definition defines Internet telephony as the technology that provides integrated communication service of voice and fax over packet-switching networks and, in particular, the IP-based network. We will call the former model the PC-to-PC Internet telephony model and the latter the gateway server model (because there must exist a gateway between existing telephony devices and the Internet).
The latter definition also indicates that Internet telephony could become far more than merely a new application of the Internet. It has the potential to challenge the biggest existing communication network, the Public Switched Telephone Network. As a matter of fact, in order to become commercially viable, Internet telephony has to be of significant size to achieve critical mass and acceptable performance. Voice and fax are the most prevalent communication applications today, and therefore a successful Internet implementation of them could become highly profitable. Whether Internet telephony becomes a success or a failure depends greatly on how the technology and economics surrounding it interweave.
History
Prior to 1995, the market for Internet Telephony products was virtually non-existent. A few public domain programs created by researchers and hobbyists did exist, but were not widely used. These early programs offered very few features and poor sound quality. However, the potential of Internet Telephony was such that both industry leaders and entrepreneurial startups dedicated a fairly large amount of effort to its development so that they would have a foothold in the business once it became a reality. This phenomenon is quite common in the recent development of Internet-related technology.
With the introduction of a new class of products that significantly enhanced voice transmissions over the Internet, the situation started to dramatically change in 1995. As a result of the explosive growth of the Internet, and the consequent positive network externalities, desktop software offering computer-to-computer communication flourished. To date, dozens of companies have released Internet Telephony products that provide real-time voice communication over the Internet using computers at each end. By the end of 1995, there were an estimated 500,000 active Internet Telephony desktop users. However, the technology was limited, requiring each individual user to have a computer with some type of Internet connection and the same Internet Telephony software.
Architecture
The new generation of Internet Telephony applications allows people to use their existing telephone and fax machines. As shown in Figure 8, the telephones and fax machines are connected in the usual manner to a Private Branch Exchange (PBX). However, instead of the PBX being connected to the public telephone network, it is connected to the Internet via a gateway server. Because it is the PBX that is connected to the Internet through the gateway server, there is no need for each user to have an independent Internet connection. The Internet gateway server at the caller site sends the packets to an Internet gateway server at the callee site. The gateway server at the receiving end is linked to a PBX to which the receiving telephone is connected.
Figure 8: Internet Telephony Architecture
The market
The current telephone system has a high quality of service. Furthermore, the circuit-switched technology used by the telephone network guarantees that this quality is consistent. Pricing of local telephone calls is regulated such that long distance calls are priced very high partly to recoup lost revenue on local calls. These characteristics of the public telephone system leave a window of opportunity for new businesses that provide telephony service differentiated from Plain Old Telephone Service (POTS) by quality and price. For historical, regulatory and technical reasons, the Internet provides a data network with fluctuating quality of service and very inexpensive transmission rates. Therefore, Internet Telephony can take advantage of the characteristics of the Internet and fill business niches that the public telephone system is creating.
The most important market for Internet Telephony (at least initially) is the market of internal corporate networks. When used within a corporation's intranet, the technology is referred to as intranet telephony. Intranets typically have less traffic and better reliability than the Internet, so intranet telephony can provide good performance for both voice and fax. Industry experts estimate that faxes constitute up to 50 percent of all international calls. According to the Gallup/Pitney Bowes Fax Usage and Application Study, 48% of the fax transmissions within Fortune 500 companies are from one company location to another, and nearly 90% use stand-alone fax machines. Therefore businesses can potentially save a lot of money on long distance calls by using intranet and Internet telephony for their faxing needs. Intranet/Internet telephony also provides some convenient features for businesses, such as the capability of broadcasting faxes to a large number of recipients.
For home users, Internet telephony may be attractive to price-conscious people who need to make many long distance calls. The successful adoption of the technology by this group depends on the quality and consistency of the voice delivered by the service, the price of the service, and people's tolerance for lower or fluctuating voice quality. It is not clear what the relation is between the quality of the voice delivered and the price that consumers are willing to pay for it, or even whether consumers are willing to pay at all for a lower-quality signal.
The interoperability of Internet telephony technology and the existing telephone system may result in a large number of new and useful applications. For example, a WWW home page, when viewed with an Internet telephony-aware browser, could conceivably offer a link that connects directly to a specified phone or group of phones: a person browsing a web site could simply click on a "Talk to an Agent" button and, using the multimedia capability of the computer, have a voice connection with a service agent.
Costs
When talking about costs and settlement issues related to Internet Telephony, it is important to understand which one of the two definitions of Internet Telephony is being assumed. Under the PC-to-PC Internet Telephony model, the costs involved include the cost of the software used at both end computers, the Internet access costs, and the usual costs derived from the Internet usage, such as congestion (which is a cost shared by everyone using the network). There are no new infrastructure costs.
Under the Internet Telephony gateway server model, there are new infrastructure costs, since at a minimum there is new hardware required (the telephony gateway server) and its associated software. Under some circumstances in which companies may want to improve the service quality between particular locations, dedicated lines could be used, therefore elevating the infrastructure costs even more. In addition to these infrastructure costs, the gateway server model also requires Internet access costs, and the usual costs derived from the Internet usage, such as congestion.
New business opportunities
Internet telephony may offer new business opportunities for small communication companies that do not have the resources to deploy new infrastructure. This is especially attractive to NPs that are being hard pressed by competition in the Internet access provider business. All NPs are looking for revenue channels beyond the mere flat monthly fee they currently charge.
One such business opportunity for the NPs would be in the area of Internet roaming, the ability to give nomadic subscribers a local connection to the Internet no matter where their travels take them. The big providers like AOL and IBM Global Network provide this service by extending their existing infrastructure, but this solution is not feasible for smaller NPs. Regional NPs can band together to create cross-border consortia offering local Internet access to roamers. One such group already is taking shape in the Asia-Pacific region, where nearly a dozen NPs are joining forces to handle one another's customers. Each NP can install a telephony server and the consortium immediately becomes a national or international telephone company.
At least one initiative is now in the works to get the Internet Engineering Task Force to establish standards for Internet roaming. This is a good example of how economics (network externalities) demands appropriate technology standards. Roaming technology standards could cover nuts-and-bolts issues like maintaining, exchanging, and updating the databases of participating NPs' local dial-in phone numbers, as well as accounting and settlement payments among NPs.
Conclusion
The characteristics of the existing telephone networks, current regulation, and the explosive success of the Internet have created many new niches and business opportunities. Using Internet telephony technology and price discrimination practices, businesses can exploit the new opportunities and fill those niches.
An obvious such opportunity is the versioning of telephone service. Businesses can offer lower-grade packet-switched voice service at a lower cost than the public telephone companies. The case of fax is interesting, because Internet-based fax may have some virtues that telephone-based fax lacks, such as multicasting, or storage at gateways so the caller's machine does not get stuck when lines are busy. While Internet voice service may be differentiated from the telephone, Internet fax service may compete head to head with the telephone system, and may result in interesting price movements or regulatory actions.
As with any other product, the pricing of Internet telephony services is limited by the prices of existing services or substitutes. What these alternatives are depends on the customer group that the business targets. For example, users who use telephony software on their PCs will compare the prices of a new Internet telephony service with the price of the software package that they are using on their PC. Therefore the price of such software limits the price that the Internet telephony service provider can charge if it intends to capture this group of customers. Similarly, pricing of Internet telephony services is limited by the prices charged by the public telephone companies.
Internet telephony may be perceived as taking business away from the existing telephone service, instead of expanding the communication business. In this case, settlement issues become particularly relevant and complicated. Long distance companies will be very reluctant to have low-cost Internet telephony services use their backbones to carry packet-switched telephone conversations and faxes. This may give them an incentive to devise new settlement contracts to prevent the Internet telephony services from cannibalizing their business, or to develop content-aware backbones and switches to control, account, and charge for the kind of data being transmitted over their backbones. Another option would be for these companies to lobby governments to introduce regulations that would protect their business.
Current Internet telephony technology is in its infancy. However, it has the potential to be the technology that allows the implementation of an interconnected global communication network, combining today's telephone network and the Internet. Faxes, email, and voice mail may be integrated into a unified messaging format. However, such a unification will require the resolution of the issues that we have discussed in this case study, that is, settlement and pricing among all the players, from the infrastructure providers to the NPs. Internet telephony may initially worsen the congestion on the Internet, creating yet another incentive for more infrastructure and, more importantly, more economically viable pricing systems over the packet-switching network.
The evolution of the Internet and network computing has spurred increasing interest in the merging of new technology and potential standards. The advantages of having a common set of guidelines that facilitate a broad application base, interoperability, simple and unified system administration, end-user ease of use, and low cost of ownership [35] have been recognized, and while a commonly accepted standard does not exist at present, various companies and organizations are proposing open and proprietary standards competing for acceptance. Recently, the idea of the Network Computer (NC) has emerged, and considerable debate and discussion have ensued ever since.
In May 1996, Apple, IBM, Netscape, Oracle and Sun announced that they were working on The NC™ Reference Profile [35], a set of guidelines for developing inexpensive network computing devices based on existing Internet standards. The profile is designed to promote the creation of a broad application base to run on NC™ clients. It does not specify a particular implementation of a Network Computer, nor does it preclude the addition of features and functions outside of the profile. The profile was finalized in August 1996 and is openly available for implementation.
In March 1997, Microsoft published a white paper[36] to promote its own version of Network Computer - the NetPC - together with Compaq, Dell, Digital, Gateway 2000, Hewlett Packard, Intel, Packard Bell, NEC and Texas Instruments. Accepting the Network Computer as the computing environment of the future, Microsoft promotes the "Zero Administration for Windows" (ZAW) Initiative and, in doing so, has stepped into the competition arena for control of the Network Computer standard.
Although there are differences in the detailed implementations and marketing strategies between these two proposed standards, the fundamental concept is the same - networked computers put together so as to provide low cost of ownership and administration and access to a wide number of easy-to-use applications.
Rather than analyzing which Network Computer camp (NC™ versus NetPC) will win the competition, in this case study we are going to look at the concept of the Network Computer in general and discuss the network pricing, cost, and settlement issues involved, in keeping with the focus of our paper. In the following discussion, "NC" refers to the general concept of the Network Computer, not to the trademark registered by Apple, IBM, Netscape, Oracle and Sun.
Before we go on, we will define and clarify the concept of NC.
The fundamental difference between NCs and PCs is that NCs are always connected to a network, and their programs originate from a network server. Some people therefore argue that the NC is just another version of the dumb terminal. This is not entirely true. With dumb terminals, applications run on a server or mainframe, not on the desktop, which is why they are technically terminals and not computers. With a CPU, a few megabytes of memory, a network interface, and an I/O interface, NCs are more than dumb terminals: although they cost less to buy and support than a PC, they can actually run software. The generic term for these devices is thin client (as opposed to fat clients, which refers to PCs). (See Figure 9) They are called thin because they are generally less complex and less expensive than a PC. All thin clients have three things in common: they cost less to buy than a typical PC; they cost less to support than a typical PC; and they are stateless machines that rely on servers to store all volatile data and software. Thin clients are primarily designed to save administrative costs over the long haul. To this end, they are made with interchangeable parts that are easy to replace when broken, because they do not store information persistently.
In a wider sense, an NC can be interpreted as a device that reproduces your computing environment anywhere. When plugged into a power source and a network, an NC presents the user's familiar computing environment, from which the user can browse the Internet, send e-mail, and compose documents that can be saved securely and privately back on the server. Inherent to the basic device is the ability to receive and send audio and video, subject to the availability of bandwidth [37].
As the popularity of NC spreads to the general public (as opposed to the initial focus of promoting NCs to corporations), two important features, namely nomadicity and mobility, need to be considered. Nomadicity refers to the ability to access information from anywhere, whereas mobility refers to the ability to access information while on the move. These features will generate interesting issues in network costs, pricing and settlements, which will be discussed below.
With less local computing power and less (or even no) local storage, the production costs of NCs are substantially lower than those of PCs, both in manufacturing and in maintenance. Manufacturing cost is outside the scope of this paper (it is not a network cost) and will not be discussed in detail here, but it is interesting to quote the cost given by Acorn Computer Group: the cost of a basic NC box for TV Web surfing ranges from $198 to $240 [38].
As claimed by NC proponents, the maintenance cost of an NC is also much lower than that of a PC because [39]:
According to research conducted by The Gartner Group, the five-year business cost of owning a PC running Windows 3.1 is $44,250; it is $38,900 for Windows 95 and $38,400 for Windows NT. On average, then, the maintenance cost of a PC is around $8,000 per year. The annual maintenance cost of an NC (accounting for the server, and on average) is estimated at $2,000 to $3,500. NCs clearly have a great advantage in terms of cost. The direct consequence of low manufacturing cost is a lower price; together with the low maintenance cost, this will affect the speed of assimilation of the technology. Most PCs are used primarily for e-mail, word processing, and database manipulation. All of these tasks can easily be performed by less powerful computers, so a market for NCs exists. The savings in maintenance cost give companies a strong incentive to switch from PCs to NCs. Although a switching cost is involved, quite a number of companies are either replacing their PCs or buying new NCs. For example, Sears agreed to buy 2,500 machines from Boundless Technologies to replace PCs on token-ring networks in stores throughout the U.S. The assimilation of this technology will in turn boost the future development of better NC models.
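The cost comparison above can be sketched as a short calculation. The five-year Gartner figures come from the text; the NC annual cost below uses the midpoint of the quoted $2,000-$3,500 range, which is our assumption.

```python
# Five-year PC ownership costs quoted from The Gartner Group study.
GARTNER_FIVE_YEAR_PC_COSTS = {
    "Windows 3.1": 44250,
    "Windows 95": 38900,
    "Windows NT": 38400,
}

def annual_pc_cost(five_year_costs):
    """Average annual cost of PC ownership across the quoted platforms."""
    per_year = [cost / 5.0 for cost in five_year_costs.values()]
    return sum(per_year) / len(per_year)

def annual_savings(nc_annual_cost):
    """Estimated annual per-seat savings from replacing a PC with an NC."""
    return annual_pc_cost(GARTNER_FIVE_YEAR_PC_COSTS) - nc_annual_cost

pc_per_year = annual_pc_cost(GARTNER_FIVE_YEAR_PC_COSTS)  # about $8,100 per year
nc_per_year = (2000 + 3500) / 2.0                         # midpoint assumption
saving = annual_savings(nc_per_year)                      # roughly $5,400 per seat
```

Multiplied over thousands of seats, this per-seat saving is the incentive behind corporate switches such as the Sears purchase mentioned above.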
Although the advantage of manufacturing and maintenance cost is obvious, one should not neglect the cost of changing technology. This cost could be enormous, especially when we want to support nomadicity and mobility. It can be broken down into the following components:
Pricing is obviously the most direct and effective way to recover costs. In the context of network pricing, we will look into two different kinds of pricing relationships: how infrastructure providers price NPs, and how NPs price users and services. The pricing of Internet software applications is also an interesting issue but it is not within our scope of focus in this paper.
The deployment of the NC largely depends on whether the supporting infrastructure will be built; and in order to provide enough incentive for the infrastructure providers, there must exist a pricing mechanism by which the infrastructure providers know they can recover their costs within a reasonable time. Currently, there is no direct interaction between the infrastructure and the users of the network, and we foresee that this will not change in the NC case. That is, infrastructure providers will not price users directly for building the network, but will instead either wholesale bandwidth (to NPs) or retail network access services (to users and services, in which case they take on the role of NPs).
The problem is that there is no way for infrastructure providers to know, before they build the network, whether they will be able to recover their sunk costs. The network externalities of NCs are not as large as those that drove the evolution of the Internet, because the NC is not the only way to access the network - not, at least, until applications such as Internet telephony and Internet video conferencing become prevalent enough that the "mobility" aspect introduces network externalities of its own. Even someone without broadband or wireless NC access can still use other means (such as a dial-in connection) to connect to the network and can probably get what he/she wants. As a result, efforts to bring the market to its critical mass may not have the most favorable outcome. Pricing mechanisms thus become very difficult to design - how should one price in order to recover costs effectively? First we will look at "wholesaling", that is, how infrastructure providers will price NPs.
Currently the "wholesaling" of network bandwidth is done in a "flat-rate" or "fixed-price" manner. Although this lessens resistance to market entry compared to the more complicated and conceivably more "expensive" usage-sensitive and priority pricing schemes, it does not guarantee a certain level of subscription to the service. It has been argued that implementing usage-sensitive or priority pricing schemes would control network flow better, but whether these should be implemented for the NC infrastructure before it is commonly available and accepted is another question. We would argue that the best pricing scheme for infrastructure providers to apply to NPs is to follow the existing pricing scheme for the Internet. If, for example, it becomes technologically feasible for the Internet infrastructure to price according to content, and this pricing scheme is widely adopted before NC broadband and wireless access is introduced, then it would be sensible to follow that infrastructure pricing scheme when the NC is introduced. Otherwise, setting up a more complicated pricing mechanism that does not guarantee increased subscription will only add to costs. The pricing scheme should be set so that the extra cost of the supporting infrastructure is minimized.
It seems that none of the pricing schemes will guarantee cost recovery for the infrastructure providers. However, there is a possibility of gaining more NC subscriptions indirectly by extending low prices to the NPs. We expect the network access providers of NCs to be a subset of the network access providers of the Internet. By extending a sufficiently low wholesale price to the NPs for NC access, the NPs would be more likely to promote NC connections to their customers to maximize their profits. Nonetheless, there is still no guarantee that the costs of the new infrastructure could be recovered, and there is little incentive for infrastructure providers to gamble on this uncertainty.
A way to overcome this problem is to introduce the concept of the NC gradually, since a "jump start" of a broadband and wireless network does not seem feasible. In fact, gradual introduction of the NC is already happening at some telephone companies. For example, PacTel will provide a technology called xDSL at the end of this year, giving broadband access over existing unshielded twisted pair (UTP) phone lines, and it is foreseeable that similar technologies will be introduced sooner or later. Although they can only provide high-speed downstream connections, the profits generated can certainly give a push to the ultimate plan for broadband access - fiber to the home, as well as the new wireless infrastructure. Since it takes time for a new product to "diffuse" into the market, another advantage of this model is that there will be more time for people to get to know the NC, and (provided the customer response is positive) this could establish the value of the NC in the marketplace and give infrastructure providers a greater incentive to make larger investments.
With regard to the pricing done by NPs, the situation is a little different. As mentioned in Part III, the accounting procedures performed by the NPs are of a smaller scale and more manageable; therefore, pricing schemes other than flat-rate are not impractical to implement. With Internet traffic doubling every three months, increases in infrastructure capacity will not be able to meet demand by the year 2000 if future usage continues at the current rate. From the AOL case study we have seen the overwhelming congestion caused by flat-rate pricing. We would expect flat-rate pricing to bring only more congestion to the network in the future, especially as NCs become more popular and more people access the network with more bandwidth-demanding applications. In order to make the NC a successful technology, network traffic must be controlled properly. Therefore, usage-based, priority, or congestion pricing schemes, or some combination of them, will have to be adopted.
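The growth claim above can be made concrete with a back-of-the-envelope projection. Traffic that doubles every three months grows sixteen-fold per year; the yearly capacity-doubling figure used for comparison is purely an illustrative assumption, not a number from the text.

```python
def traffic_growth(months, doubling_period=3.0):
    """Traffic growth factor if volume doubles every `doubling_period` months."""
    return 2.0 ** (months / doubling_period)

def capacity_growth(months, annual_factor=2.0):
    """Hypothetical capacity growth that merely doubles once a year (assumed)."""
    return annual_factor ** (months / 12.0)

# At the quoted rate, demand outruns such capacity 8-fold within one year
# and 64-fold within two years.
for months in (3, 6, 12, 24):
    gap = traffic_growth(months) / capacity_growth(months)
    print(f"after {months:2d} months: demand x{traffic_growth(months):.0f}, "
          f"capacity x{capacity_growth(months):.2f}, gap x{gap:.1f}")
```

The widening gap is the quantitative reason flat-rate pricing alone cannot hold congestion in check.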
Other than pricing, settlements can also be effective for cost recovery when administered properly. In this context, settlements may be viewed as a supplement to pricing: there are issues which current pricing models cannot solve effectively, and we have to rely on other means to recover some of the costs and to introduce proper incentives for investment as well.
Since the Internet will naturally become part of the NC infrastructure, some settlement issues of the NC will be inherited from the unresolved issues of the Internet, particularly the settlements for "dial-in access" and "pass-through traffic" mentioned in Part IV. The problem of dial-in access is expected to become less serious as NC technology matures, because people will start using broadband or wireless access instead. This switch in means of access, however, brings us to new settlement issues.
As we have mentioned, mobility can only be supported by a wireless infrastructure. A model similar to the current personal communication system will be employed - that is, there will be cells allocated for frequency reuse. (See Figure 10) If we assume that there will be many network access providers and that each can cover only a limited number of cells, then it is very possible that when a user crosses a cell boundary, there is a change of network access provider. (Although less probable, there could be a change of infrastructure provider too.) It would be very inconvenient for users if they were required to pay both NPs, for example, in order to maintain access when they move across a cell boundary. This situation will become more common as the NC becomes more popular and the number of users increases - to increase capacity per area, frequencies have to be reused more often, which gives smaller cell areas and hence a greater chance of crossing a cell boundary while a user is moving. It would then be almost impossible for users to keep track of the many network access providers.
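The effect of shrinking cells on hand-off (and hence settlement) frequency can be sketched with the classical fluid-flow approximation, under which a user moving at speed v through circular cells of radius R crosses a boundary about 2v/(πR) times per unit time. The speeds and radii below are illustrative assumptions.

```python
import math

def handoff_rate(speed_m_per_s, cell_radius_m):
    """Expected cell-boundary crossings per second for a single user,
    using the fluid-flow approximation for circular cells: 2v / (pi * R)."""
    return 2.0 * speed_m_per_s / (math.pi * cell_radius_m)

# Halving the cell radius doubles the hand-off events that must be settled.
pedestrian_macro = handoff_rate(1.5, 1000.0)   # walking through 1 km cells
pedestrian_micro = handoff_rate(1.5, 250.0)    # walking through 250 m microcells
vehicle_micro = handoff_rate(15.0, 250.0)      # driving through 250 m microcells
```

The inverse dependence on cell radius is why denser frequency reuse multiplies the inter-provider settlement events discussed above.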
This is not just a problem of mobility but also nomadicity - consider the case when a user travels to another part of the country which his/her own network access provider does not cover; he/she would then have to gain network access from another network access provider.
While we realize this kind of problem should be solved transparently to the user to maximize user convenience and user satisfaction, an interesting settlement issue arises - how should settlements be done for this kind of hand-off of NPs and infrastructure providers?
The simplest solution would be to do no settlements at all. One could justify this approach by arguing that the amount of hand-off traffic to each network access provider is approximately the same; this is how settlement is currently done in the Internet and in cellular phone systems. In this solution, network access providers come together to sign a multilateral agreement (in cellular phone terminology, a "roaming agreement") to carry any hand-off traffic, and no extra settlements are required. Even if the NPs do not complain about fairness when they suddenly have to carry heavy traffic with no compensation, this solution raises problems when the unoccupied capacity of an area is not large enough to hold the hand-off traffic. More complicated settlement models could of course be applied, which could include accounting for the amount and type of traffic, the time at which the traffic occurs, and the distance between the two network access providers. Rather than accounting for every single packet (which would be prohibitively expensive), the NPs can do their settlements based on a sample of the traffic, as is done in some European railway systems.
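The sampling idea in the last sentence can be sketched as follows: charge only for a random fraction of packets and scale each sampled charge up by the inverse sampling probability, which keeps the estimate unbiased while cutting the accounting workload. The per-byte rate and the traffic trace here are invented for illustration.

```python
import random

def sampled_settlement(packet_sizes, rate_per_byte, sample_prob, seed=0):
    """Estimate a settlement from a random sample of packets.

    Each sampled packet's charge is scaled by 1/sample_prob, so the
    expected value of the estimate equals the full-accounting figure."""
    rng = random.Random(seed)
    total = 0.0
    for size in packet_sizes:
        if rng.random() < sample_prob:
            total += size * rate_per_byte / sample_prob
    return total

trace = [512] * 10000                 # synthetic trace: 10,000 512-byte packets
exact = sum(trace) * 1e-6             # full per-packet accounting at $1e-6/byte
estimate = sampled_settlement(trace, 1e-6, 0.05)  # meter only ~5% of packets
```

For large traffic volumes the sampled estimate tracks the exact figure within a few percent, at a small fraction of the metering cost - the same trade-off the railway-style sampling exploits.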
While an optimal solution remains unclear at the moment, it should be realized that if this problem is not solved properly, it could result in under-investment in infrastructure and hinder the assimilation of this technology.
Now that the cost, pricing, and settlement issues of the NC have been considered, we can assess the mutual impacts between the NC and network technology. The existing network technology will affect the assimilation of NC technology, and the introduction of the NC will in turn affect the future design of the network technology itself. These impacts are summarized as follows:
Impacts of current technology on NC:
Impacts of NC on future technology:
Lessons learned, and will the NC be a success?
The lessons that we learned in this case study include:
We have observed the advantages that the NC will bring and the difficulties it faces with current technology. So, to conclude, will the NC be a success?
Based on our discussion of network costs, pricing, and settlements, we would argue that unless the hindrances to the development of the NC are resolved very soon, it will be difficult for the general concept of the NC to experience exponential growth and "universal acceptance" in the near future. Rather, it is quite possible that the NC will start as a low-cost "corporate-type" computing solution and gradually gain features such as nomadicity and mobility as infrastructure providers find enough incentive for investment and as the various pricing schemes and settlement models are worked out.
In this section we will discuss how the issues in our paper interact with those of the other class groups. Just as there are many ways to approach a problem, there can be many perspectives from which to view an issue. Up to this point, our paper has discussed the issues of networking technology in terms of network costs, pricing, and settlements. Now we will give the reader an idea of how these issues interact with other, non-pricing/cost/settlement factors in the success of the technology. Sometimes it is a stretch, but for each other class group we try to identify at least one interaction between that group's focus and pricing, costs, and settlements.
Network and Path Dependent Effects (Group A):
The most obvious issue involving network effects in network pricing is the popularity of the low flat rates offered by several network access providers. As studied in our AOL case, a network access provider would need an average of 6 months to recover its network access costs by charging a flat rate of $19.95 a month. The motivation for flat-rate pricing, especially for AOL, is to capture market share in order to take advantage of the network effect.
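The payback arithmetic implied here can be sketched as below. The $19.95 flat rate and the six-month recovery period come from the text; the roughly $120 per-user access cost is inferred from them and should be read as an assumption, as should the zero marginal monthly cost.

```python
def payback_months(monthly_fee, upfront_cost_per_user, monthly_cost_per_user=0.0):
    """Months of flat-rate revenue needed to recover a per-user access cost."""
    margin = monthly_fee - monthly_cost_per_user
    if margin <= 0:
        return float("inf")   # the fee never recovers the cost
    return upfront_cost_per_user / margin

# An up-front cost of about $120 per user reproduces the roughly
# six-month recovery period cited in the AOL case study.
months = payback_months(19.95, 120.0)
```

Any positive marginal monthly cost per user (support, congestion-driven capacity upgrades) lengthens the payback, which is exactly the risk a land-grab flat-rate strategy takes on.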
As the technology evolves, we foresee a need to adopt a new pricing scheme in order to control network usage better. Whenever there is a switch from an old system to a new one, a switching cost is involved. In the United States, however, flat-rate pricing has prevailed, and as it becomes more and more widespread, it becomes relatively more difficult to change the popular pricing paradigm and, say, charge based on usage and congestion.
One last network and path-dependent effect: there is a "lock-in" effect in the development of infrastructure. Currently, network access is done mainly via dial-in. Since the equipment is geared towards this mode of access and users are accustomed to paying a normal phone line's rate to gain access to the medium, the switching cost of going from dial-in to broadband home access could be huge. In fact, this is the main reason why infrastructure providers currently lack the incentive to invest in broadband home access and wireless infrastructure.
Human Factors (Group C):
As mentioned in Part III, while pricing schemes should help to control network flow, they should not cause too much inconvenience to the users. Clearly, any pricing scheme other than flat-rate pricing will bring some form of "extra" inconvenience. For instance, if usage-based pricing is adopted, users might find it inconvenient to keep track of the amount of their traffic; and if congestion pricing is used, they might find it inconvenient to distinguish congested hours from non-congested hours and to learn the prices charged at each point. Unless these procedures can be made transparent to the users, it is arguable that many users will be reluctant to adopt a new pricing scheme, even if they would pay a little less than under flat-rate pricing.
Human factors are also involved in getting used to new pricing schemes and settlement models when they are implemented. Productivity is expected to decrease and cost to increase because of this factor, which may slow the pace at which new pricing schemes and settlement models are adopted.
Collaborative Design (Group D):
Collaborative technologies have brought us exciting new network applications like "white board" and video conferencing. Along these lines, it is possible that the development of collaborative technologies could eventually enhance the successful implementation of different pricing schemes and settlement models. For example, success in developing a network with symmetric links (today's network links are mostly asymmetric, with high-speed downstream but low-speed upstream) may make the accounting of packets much easier and more cost-effective.
Legal and Regulatory (Group E):
Our issues are in many respects related to legal and regulatory issues. The most controversial is the FCC's exemption of Enhanced Service Providers (ESPs). As discussed in Part IV, Internet service providers currently do not compensate the telephone companies for their customers' dial-in access to the network. This has drawn considerable complaints from the telephone companies because of congestion at the "local loop": a call to a network access provider is on average much longer than a normal phone call, and the phone companies have to build larger switches and increase network capacity to accommodate this traffic. Worse still, as long as the phone companies receive no settlements from the ISPs, the ISPs can continue offering low prices to users, so users have no incentive to scale back the length of their access calls. Another negative result is that the phone companies have neither the money nor the incentive to further the development of broadband access to the home.
In addition, the government can play an important role in the implementation of new pricing schemes and settlement models. First, it can subsidize the network access providers and the infrastructure providers to offset costs in the initial stage of implementation. Second, it can enforce interoperability of networks, which in turn will affect the settlement models. Third, the government could possibly mandate the development of content-aware or application-aware network architectures in the future (to deal with indecency, for example). Were this last mandate in place, the pricing schemes would be affected, because a content-aware or application-aware network can easily account for the type of packets being sent. Finally, it is possible that Internet access will become as "necessary" as telephone service in the future. If the government were to impose legislation requiring "universal service" of Internet access, the pricing structures would most certainly be affected.
Industrial Organization (Group F):
While people are discussing which pricing scheme and settlement model should be adopted, an interesting question arises: should a new industrial organization be formed to facilitate the adoption of these new pricing schemes and settlement models?
We can think of the following possibilities. First, an "external" organization could be formed to deal explicitly with all the network settlements among network access providers and infrastructure providers. Second, some form of alliance between network access providers and infrastructure providers could be formed: for example, an alliance of infrastructure providers to standardize the next generation of infrastructure to facilitate new pricing schemes, or alliances of infrastructure providers and network access providers to ensure the smoothest possible transition.
From our case studies, we learned that industrial organization affects pricing scheme implementation in different ways: monopoly facilitates the implementation of usage-based pricing (the New Zealand case study), while competition tends to drive the pricing scheme to flat-rate (the AOL case study). We would therefore expect that some kind of industrial organization is needed in order to carry out new pricing schemes.
Inter-Organizational Design (Group G):
The issues related to Inter-Organizational Design are very similar to those related to Industrial Organization. The difference lies in how we identify the organizations (discussed in the previous subsection) and in how companies will form them.
Therefore, the related issue here is that when the network access providers and infrastructure providers see a need to collaborate so that settlements issues can be resolved more easily and properly, how will they come together and form an organization or alliance? A standard body, a consortium, a joint venture or a technology web?
Standards (Group H):
An infrastructure standard specifying the degree of content awareness, should one be adopted either by legislation or de facto, would affect the feasibility of implementing different pricing schemes. And surely, if different infrastructure standards arise, it would be difficult to unify the pricing schemes and settlement models.
Similarly, it is feasible that standards for pricing users and services could evolve in the future; whether and when they will evolve will depend on government decisions and the economics of the network. For instance, since flat-rate pricing makes it difficult for network access providers to recover their costs, it is possible that usage-based or congestion pricing will become standard under the driving force of economics.
Finally, one last interesting issue: the de facto standard of TCP/IP (IPv4) actually hinders the development of some pricing schemes, such as priority pricing, because it has no widely deployed mechanism for differentiating packet priorities. With ATM and IPv6 under development, it remains a question whether TCP/IP (IPv4) is the best protocol for network transport.
It is clear from the recent explosion in the growth of the Internet that there is heightened interest in, and awareness of, the potential the Internet can provide. Companies can reach more customers and penetrate new markets more easily, users can quickly find what they need and shop at home, and friends can communicate with each other without having to worry about large telephone bills. While the usage of the Internet has greatly increased, the number of businesses developing new applications has also grown dramatically.
All this seems to suggest the technology has a bright future. However, even now, people are experiencing the growing pains of the Internet and realize that the network cannot grow indefinitely without proper means of control. Problems such as network congestion, dial-in access, and pass-through traffic have become more and more apparent, and it is clear that these issues must be resolved in order to sustain the growth and expand the scope of the network in the future.
One of the most important factors governing the advancement, assimilation, and dissemination of future technology is economics. In this paper, we have studied how pricing and settlements can act as effective means to recover high sunk costs and to control usage of the network. We realize that pricing schemes and settlement models have to be implemented properly in order to correctly guide future investment in and development of the network.
Flat-rate pricing, although it currently dominates the market, is not an efficient pricing scheme. We have learned from the AOL, New Zealand, and Chile case studies the problems that flat-rate pricing can bring. In spite of the quick initial assimilation and dissemination of technology brought about by flat-rate pricing, it threatens the future development of the technology.
Eventually, as interest in real-time and bandwidth-demanding applications (such as Internet telephony) increases, there will be greater demand for better quality of service, and we foresee some form of usage-based, priority, or congestion pricing (or combinations of these) both at the network access provider level and at the infrastructure level. We realize that implementation in the latter case is more difficult and may not be economically feasible because of the huge accounting overhead an infrastructure provider would have to bear. Nevertheless, these new pricing schemes would most probably succeed when:
While these are not likely to happen in the near future, in the short run we expect some form of settlements to be a useful way to ease some of the problems, such as dial-in access. This, however, may have to be enforced by regulation.
The future of network technology is stimulated by the introduction of the Network Computer, which brings issues of broadband and wireless access into play. Having analyzed the pricing, costs, and settlements of the Network Computer, we firmly believe that they will have a great impact on future technology. The viability and feasibility of "fiber to the home", "universal service", and "mobile computing" will all depend on costs and on how pricing and settlements are to be done. Of course, these new technologies will in turn affect the current pricing and settlement models as they develop.
While recognizing the importance of network pricing, costs, and settlements to the advancement, assimilation, and dissemination of the technology, we should not neglect the roles played by other factors such as human factors, legal and regulatory issues, and industrial organization. Only by taking all factors into account and resolving them one by one can we make the deployment of future network technology successful.