1. Introduction
They are all around us these days: radio waves. We have learned to make use of them in a diversity of services, from radar and air-traffic control through mobile telephony to satellite communication and Wi-Fi networks. It is hard to imagine a modern society without them, as they are integrated into so many technologies on which we depend heavily. Recently, it has also come to the attention of the general public that radio waves are a scarce resource making up the electromagnetic spectrum. This realisation is partly due to the fact that pricing strategies for spectrum access have been put in place, not infrequently justified by the argument of scarcity. The underlying idea of this marketisation of the spectrum is that supply and demand can balance scarcity and ensure that the scarce resource is used for what is most highly valued. Whatever the consequences of such a paradigm, it is clear that a price tag makes a resource more visible, even an invisible resource like the spectrum.
Whereas pricing of the spectrum is a fairly new phenomenon, the scarcity of the spectrum is not. On the contrary, as early as the 1920s the demand for radio waves was higher than the supply, leading to what contemporaries called “chaos in the ether”. The new communication service that came to be known as broadcasting soon became extremely popular, and transmitters mushroomed in (almost) every corner of Europe. However, the lack of coordination and control quickly made listening difficult. The situation can be compared to a cocktail party; in the early evening it is easy to carry on a conversation, but as the room fills, listening becomes difficult as people raise their voices in order to be heard. Yet within a few years order was restored to the European “ether”. One of the aims of this paper is to explain how that happened. Another is to understand why it worked.
One way of approaching this issue is to assume that the radio spectrum is a common-pool good (Ostrom 2005, 24) in which two activities take place: the transmission of content and the reception of that content. As production increases, crowding occurs. At this point the radio spectrum is in effect an open-access commons in the sense that Hardin (1968) gave the term. The crowding is peculiar in that it takes place both in a geographical dimension, with transmitters close in space, and in a frequency dimension, with broadcasting on adjacent frequencies in the electromagnetic spectrum. The production process creates externalities of interference that effectively remove frequencies from the available pool. The central aim of this paper is to analyse the institutional design that enabled a resolution of the externalities resulting from this technology-dependent production process. I propose to investigate whether the design principles identified for institutions of long-enduring common-pool resources are also relevant in this case (Ostrom 1990). Moreover, I analyse whether and how technology is connected to the institutions created.
The design principles are important both as analytical instruments for understanding the past and present and as tools for practically managing resources. The principles originally formulated by Ostrom (1990) more than 20 years ago have passed many “tests” and Ostrom (2005) suggested only minor revisions. One of the contributions of the present paper is to test the model on a new type of resource, the electromagnetic spectrum.
The identified design principles are eight in number, even though they regulate more than eight conditions. Briefly, the first principle demands that the resource, and the group of users or appropriators, have clearly defined boundaries. The second principle identifies the need for the rules of appropriation to be tuned to varying local conditions. Third, those affected by the rules should generally also be able to modify them. The fourth principle states that those monitoring appropriators’ behaviour should either be the appropriators themselves or accountable to them. The possibility of imposing sanctions on rule breakers has been identified as a fifth principle, and access to cheap conflict-resolution mechanisms constitutes the sixth. The seventh design principle concerns the appropriators’ right to organize, which must not be challenged by external governmental authorities. Finally, the eighth principle, which primarily concerns complex resources, identifies the possibility of multiple levels of governance, introducing the term “nested enterprises” to refer to such situations (Ostrom 1990, 90).
In the following, radio communication is introduced with some historical background. Then follows the case in point: a discussion of the institutional history of the portion of the spectrum used for broadcasting in Europe, identifying and treating the design principles listed above as they pertain to this resource. In the discussion, the relation between the institutions and the technology is explored, and the paper ends with a short comment on the market paradigm.
2. An introduction to radio communication
The history of broadcasting has been told a number of times and is not the primary focus of this article. However, a few developments that are crucial to the arguments in this paper need to be described.1
Radio technology, or wireless, as it was often called in reference to the already existing technology of wired communication, emerged during the second half of the 1890s, and successful long-range transmissions soon followed. Its usefulness was quickly realised, and as early as 1903 the first international conference on radio took place in Berlin. It was followed by another conference three years later, by which time the number of participating countries had risen from nine to 29. Delegates from most of the participating countries signed the Convention and the annexed Radio Regulation, which provided more detailed rules for better communication (Codding 1952). The increasing number of coastal stations and the use of radio for maritime services eventually called for another conference, which took place in London in 1912, just a couple of months after the Titanic had sunk; the disaster served as a forceful argument for further extending the use and regulation of radio (Douglas 1987).
Broadcasting was introduced gradually from 1920 onwards and quickly became very popular and influential (Douglas 1987). Today, broadcasting is probably still the best known of all radio services, even though mobile telephony might be catching up in parts of the world. The great interest of the early days led to a boom in new radio stations. However, the 1912 London Radio Conference had limited bearing on broadcasting, even though the Convention contained articles stating that no radio station should interfere with any other station. Thus, when broadcasting spread across the continents, broadcasters used the wavelengths that seemed available and that best served their interests, normally between 300 and 500 m, corresponding to 600 kHz–1 MHz in what we today call the Medium Frequency Band.2 The result was severe crowding of the used part of the radio spectrum. The early efforts to cope with this crowding are the focus of this article.
3. The Geneva plan of 1925 and the principle of boundaries
Very soon interference was recognised as one of the biggest problems of broadcasting. The Swiss radio enthusiast Maurice Rambert, who was the first Swiss to have a permit to broadcast in his country, started his service in October 1922. The following year he proposed an international meeting, which took place in Geneva in April 1924. One year later, the International Broadcasting Union was founded, or UIR (Union Internationale de Radiophonie) as it was more often called, French being the working language of the Union (Briggs 1961; Spohrer 2008).
Representatives of broadcasting companies from 10 nations, all European, were present when the UIR was established in Geneva on the 3rd and 4th of April 1925. The founding members were companies from Austria, Belgium, Czechoslovakia, France, Germany, Great Britain, the Netherlands, Norway, Spain and Switzerland. At the second assembly in March 1926, companies from the following countries formally entered: Denmark, Hungary, Italy, Sweden and Yugoslavia. The first non-European companies were admitted in 1927. The founding members agreed to place the seat of the UIR in Geneva and to elect a Council of nine members to direct the Union. The Council was to meet four times a year and the Assembly at least once a year. Admiral Carpendale of the BBC was elected president of the Assembly, and H. Giesecke of the Reichs Rundfunkgesellschaft in Germany and Robert Tabouis of the French Federation of private broadcasting stations were elected vice presidents. A permanent office was established and A. R. Burrows, also of the BBC, was appointed director.3
The UIR sought to establish connections between European broadcasting companies, and eventually with broadcasters on other continents, with the larger aims of defending the interests of these companies and working for the growth of broadcasting. The most immediate questions had to do with the exchange of programmes, statistics and knowledge, and with issues of copyright and programmes as private property. However, as the organisation’s first historical account noted, the “most urgent problem” was the allocation of wavelengths. To solve this problem, a conference of European engineers was scheduled for July 1925.4
The engineering conference, which met at the League of Nations in Geneva even though the UIR had no official standing within the League, was chaired by BBC chief engineer P. P. Eckersley. He proposed to consider only the wavelengths between 200 and 600 m, which was also the decision of the conference.5 This shows that the boundaries of the resource in question were clearly defined, in conformance with the first design principle that Ostrom has identified as central for achieving robust, sustainable common-pool resource institutions (Ostrom 1990). The other element of this double principle (Ostrom et al. 2002, 49) was the need for clear boundaries of the group of appropriators. Yet, as will be evident below, the boundaries of Europe were not as clear as the boundaries of the resource, which led to conflicts over how to divide the resource.
It was soon evident that it was impossible to fit all existing stations into the proposed wave band between 200 and 600 m (excluding initially 200, 300 and 600 m since they were already in use for maritime services). The 14 delegates provided a list of 126 stations, of which 38 were projected (and hence not yet put into operation).6
A sub-committee worked out the actual plan, which was to be presented at the next meeting. The sub-committee made a number of suggestions to limit interference between stations, which were then adopted as recommendations by the conference:
- action should be taken against transmitters which produced harmonics deviating from the transmission wavelength
- permits should not be granted to a station deviating more than 0.33% from its wavelength
- stations with a power exceeding 2 kW should be separated from other stations by at least 1500 km geographically and 10 kHz in frequency
- amateurs should not be allowed to transmit unless they could show “thorough scientific knowledge and enough technical skill” to operate their equipment, and finally
- no new spark or arc system would be taken into operation.
The conference also agreed on a definition of transmission power.7 The recommendations were to be communicated to the member nations via the League of Nations’ Organisation for Communication and Transit.8
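To make the quantitative rules concrete, the sketch below checks invented stations against the two numerical recommendations: the 0.33% tolerance and the separation rule for stations above 2 kW (read here as requiring both the geographical and the frequency separation). All station data and helper names are illustrative assumptions, not taken from the historical record.

```python
from dataclasses import dataclass

# Illustrative check of the 1925 Geneva recommendations; all figures
# below are invented examples, not historical station data.

@dataclass
class Station:
    name: str
    assigned_khz: float   # assigned wave, expressed as a frequency
    measured_khz: float   # what monitoring actually observes
    power_kw: float

def within_tolerance(s: Station, tolerance: float = 0.0033) -> bool:
    """Rule: a station may not deviate more than 0.33% from its wave."""
    return abs(s.measured_khz - s.assigned_khz) / s.assigned_khz <= tolerance

def adequately_separated(a: Station, b: Station, distance_km: float) -> bool:
    """Rule for stations above 2 kW: at least 1500 km and 10 kHz apart
    (one reading of the recommendation; the minutes are terse)."""
    if max(a.power_kw, b.power_kw) <= 2:
        return True
    return distance_km >= 1500 and abs(a.assigned_khz - b.assigned_khz) >= 10

a = Station("A", assigned_khz=800, measured_khz=801, power_kw=5)
b = Station("B", assigned_khz=806, measured_khz=806, power_kw=3)
print(within_tolerance(a))               # True: 1 kHz off 800 kHz is ~0.13%
print(adequately_separated(a, b, 2000))  # False: only 6 kHz apart
```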
The agreements on technical details illustrate the state of the art of radio technology at the time and guided the overall effort to use the spectrum efficiently. The decision to take action against those who disturbed others by producing harmonics or by deviating too much was a step towards a system of sanctions. The possibility of imposing sanctions is the fifth design principle identified by Ostrom, but at this point it was merely an idea, and it was not clear what transnational action could be taken against breaches of agreements. The forbidding of new spark transmitters, which disturbed others immensely by their operation, and the exclusion of amateurs who were not knowledgeable enough, are also indications of the sensitivity to technological performance; certain technologies and/or their operators were simply not allowed and were shut out from the production process. Finally, the fact that both the power and the separation of channels were regulated was perhaps the most striking example of how technological performance at the time was essential to the division of the resource.
After night trials with calibration signals transmitted from the Eiffel tower, the engineering conference convened again three months later. Views differed on the success of these trials, but agreement was reached that a separation of stations by 10 kHz was adequate. The actual work on the plan was referred to a Technical Commission, which had the same composition as the preceding sub-committee, but which was made a permanent committee of the UIR in March the following year and hence given a more formal status.
The final plan was proposed in December by the Technical Commission. Three factors would decide the number of stations a specific country was entitled to, namely area, population and economic development, the latter calculated as telegraphic and telephonic traffic based on existing statistics. In this way the problem of wave allocation could be transformed into “a simple arithmetic task” as the Swedish delegate put it to his colleagues in an article the following year.9
- N = the fraction of the total number of stations allotted to a country
- A = the country’s area divided by the total area of Europe
- B = the country’s population divided by the total population of Europe
- C = the country’s telegraphic and telephonic traffic divided by the total traffic in Europe
- N = (A + B + C)/3
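A minimal sketch of this arithmetic, with invented figures for a fictitious country; the real inputs were the area, population and traffic statistics assembled by the Technical Commission, and the totals depend, as discussed below, on how large Europe is deemed to be.

```python
# The Technical Commission's formula with invented example figures. Because
# each factor is divided by the European total, the shares N sum to one over
# all countries, and N times the number of exclusive wavelengths gives a quota.

def station_share(area, population, traffic, europe_totals):
    """N = (A + B + C) / 3, each term a fraction of the European total."""
    a = area / europe_totals["area"]
    b = population / europe_totals["population"]
    c = traffic / europe_totals["traffic"]
    return (a + b + c) / 3

totals = {"area": 10_000_000, "population": 450_000_000, "traffic": 1_000_000}
n = station_share(area=450_000, population=40_000_000, traffic=90_000,
                  europe_totals=totals)
print(f"N = {n:.3f}")        # the country's fraction of Europe's stations
print(round(n * 98))         # quota out of, say, 98 exclusive wavelengths
```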
As I have shown elsewhere, this equation meant that the number of stations to which, for example, Great Britain was entitled depended on how large Europe was deemed to be (Wormbs 2008). A large Europe meant that each country would constitute a smaller fraction of the total. The final plan applied to “that part of Europe lying between the meridian 7°30′W of Greenwich and the meridian 32°30′E of Greenwich”.10 This meant chopping off the westernmost parts of Ireland and Portugal and including all of Finland, Kiev, Odessa and Antalya to the east. Europe thus consisted of 29 countries: to Austria, Belgium, Czechoslovakia, Denmark, Estonia, Finland, France, Great Britain, Germany, Holland, Hungary, Ireland, Italy, Latvia, Lithuania, Norway, Poland, Rumania, Spain, Sweden and Switzerland were added Albania, Bulgaria, Greece, Luxemburg, Portugal, western Russia, European Turkey and Yugoslavia.
This situation illustrates the point made above concerning the boundaries of the resource. Even if the frequency band was clearly defined, the geographical area concerned did not follow from it in any predestined way. As the borders of Europe changed, so did the number of appropriators. But the resource remained the same. This is a peculiarity of the radio spectrum and is partly connected to the fact that it is not localised in the way normal resources are. It is anywhere and everywhere.
There are 98 wavelengths available between 200 and 600 m, if 200, 300 and 600 m are omitted (since those were reserved for maritime traffic) and a separation of 10 kHz is assumed. However, as agreed from the beginning, stations close in wavelength should be separated geographically, and vice versa. Two stations with wavelengths separated by 10 kHz would have to be placed at least 1000 km apart. On shorter waves, however, the distance would have to increase, since their reach was longer. The actual plan was then made with a map, a thread and a box of pins. A thread of the appropriate length was attached to the first pin, which was placed in a corner of Europe. The thread would then describe an arc on which the next wavelength could be placed. The procedure would be repeated from there until all pins, representing radio stations, had been placed on the map.
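The figure of 98 can be checked with a few lines of arithmetic, using the round value of 300,000 km/s for the speed of light, which makes the metre-to-kilohertz conversions exact:

```python
# Count the 10 kHz channels between 600 m and 200 m, dropping the three
# wavelengths reserved for maritime traffic. f (kHz) = 300,000 / lambda (m).

def khz(wavelength_m: float) -> int:
    return round(300_000 / wavelength_m)

low, high = khz(600), khz(200)                 # 500 kHz .. 1500 kHz
channels = range(low, high + 1, 10)            # 101 candidate channels
reserved = {khz(m) for m in (600, 300, 200)}   # 500, 1000 and 1500 kHz
print(sum(1 for f in channels if f not in reserved))  # prints 98
```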
The original idea of having only exclusive wavelengths, i.e., wavelengths to which the user had exclusive right and did not have to share, was also abandoned for the final plan. As a memorandum from the Norwegian delegation pointed out, the scattered population of Norway, living along fjords with difficult transmission features, would not have radio if only exclusive wavelengths were allowed. Smaller stations with limited reach would have to be allowed in order to cover Norway.11 Hence common wavelengths were introduced, which also allowed for a growth of the European broadcasting system. The geographical separation of these stations was also important.
The introduction of common wavelengths was central to the sustainability of the plan since it allowed for more flexible use. It also illustrates the second design principle, which concerns the match between rules of appropriation and local conditions. By permitting common wavelengths parallel to the exclusive ones, transmission characteristics could be improved in topographically difficult parts of Europe, as the Norwegian example has shown. On a general level, the mere existence of the UIR and its work on a European frequency plan is an illustration of this principle. In the United States, for example, the allocation of wavelengths was managed in a totally different, command-and-control manner (Aitken 1994; Slotten 2000).
Common wavelengths were important not only because of local geographic conditions but also for economic and cultural reasons. Broadcasting was organised differently in different European countries. Some countries had state monopolies and others dual services (in which case a country might have two members in the UIR). Some services were well developed and extensive, and some had only just begun organising and broadcasting. On one point they were similar, however: they shared the idea that broadcasting should be a national service. In fact, one can argue that one of the main reasons for creating the institution was to keep broadcasting national.12 The different organisational forms of broadcasting on a national basis meant that wavelengths sometimes had to be distributed on the national level as well. In other cases, bearing in mind that monopolies were not uncommon in Europe at this point, the wavelengths allocated to the nation were the same as those given to the broadcasting company. As the main focus of this paper is on transnational institutions, I will not go further into the different national solutions.
The Norwegian example, moving to change a set of rules in order to enhance the performance of the institution, i.e., the plan for wavelength allocation, also illustrates what Ostrom has characterised as collective-choice arrangements. This third design principle states that those affected by the operational rules should be able to modify them. Most participating appropriators in this case did have an opportunity to take part in changing day-to-day rules. The actual influence of each appropriator differed, however, which is not very surprising. At this stage the level of know-how was one of the factors that contributed to the weight given to a proposal, but economic and political power also mattered. The design of the formula for dividing the exclusive wavelengths can be seen as an attempt to achieve a fair allocation, regardless of national power.
The plan was proposed to the Council in March 1926 and was accepted at a meeting in Paris in July 1926. However, it was not until November that it was officially put into place. The reason for this delay was mainly that wave meters had to be installed at all exclusive stations to ensure that their wavelengths remained stable. The Geneva plan also had to be ratified by national governments before acquiring any type of formal status, even though it should be stressed that it was still not binding under international law.
When the Technical Commission became a permanent committee of the UIR, it was also entrusted with supervising the application of the plan and engaging in studies of importance for broadcasting.13 A Technical Centre was established in Brussels, and the Belgian engineer and radio pioneer Raymond Braillard was made chairman of the Committee and head of the Centre, which employed a small technical and administrative staff. The Technical Committee had as its raison d’être to monitor and measure adherence to the Geneva plan. It also manufactured frequency meters to ensure that individual stations kept to their designated wavelengths according to the plan. As Andreas Fickers has shown in his unpublished work on the Technical Committee (2008), Braillard wrote continuous reports on the state of the frequency situation in Europe. As will be discussed further below, this institution was central to the possibility of imposing sanctions. And it was heavily dependent on existing monitoring technology.
4. The 1927 Washington conference and the right to organize
As mentioned above, the UIR was international but non-governmental, giving it a different status from the League of Nations or the International Radio Telegraph Conference. Its members were not nations but organisations, which were not in a position to make any binding transnational decisions on frequency planning. Hence, it mattered greatly for the UIR what agreements were made at the next plenipotentiary radio conference, which after much ado convened in Washington in the fall of 1927. Preparatory meetings were carried out under the umbrella of a European conference of wireless engineers in Brussels in January 1927. In these meetings the USSR also took part, in an effort to safeguard the existing plan.14
More than 80 nations were present in Washington, and almost as many companies and organisations, although the latter had no right to vote. The USSR, however, was not present, not having been invited by the US, since the US government had not yet recognised it. Still, the USSR made proposals to the Washington conference, as it had the right to do, since it had taken part in the 1912 London agreement. These proposals were treated together with the other proposals. But the USSR never ratified the final convention and was hence under no obligation to follow it (Codding 1952, 116–117; Tomlinson [1945] 1979). The fact that the US did not invite the USSR to Washington has been taken as proof of the non-neutral character of the International Telecommunication Union (ITU). It has been argued that what might be regarded as a technical organisation was in fact political all along (Noam 1992, 295). On a general level this conclusion is certainly true; yet even if the USSR had been invited, there is ample proof that frequency discussions in general are not neutral. As far as the right to organise is concerned, the exclusion of the USSR can be regarded as a breach of that right. The seventh design principle concerns the recognition of the right to organise, stating that it must not be challenged by external governmental authorities. In excluding the USSR, the Washington conference could be viewed as just such an external governmental authority. However, as we shall see below, the USSR was to become part of a regional effort in which it kept its right to organise.
The Washington conference was important in many respects. It basically established the institutional setting for international radio regulation still in force today (Codding 1952; Tomlinson [1945] 1979). The allocation of bands for specific services was one important achievement, as was the creation of the International Radio Consultative Committee (CCIR). Of specific importance is the way in which the Convention gave every government the right to use a frequency as long as it did not interfere with already existing services. This provision meant that, in theory, every government retained its autonomy over the ether. In practice, however, overcrowded frequency bands made it very difficult to establish new services, despite the autonomy of governments. For the purpose of this article, the agreements on broadcasting are of special interest. The above provision of the Convention was in effect a threat to the work of the UIR. In fact, the Washington conference as a whole can be viewed as a challenge to the right to organise, since the status of the UIR was not official and hence the rules agreed upon by its members could be overthrown by the Washington Convention. In the end, however, regional conferences to deal with regional issues were allowed.
The Washington Convention stated that the band for broadcasting in Europe should be 200–545 m, which was a reduction compared with the band actually in use for broadcasting. This decision did not come without intense debate. It shows that the power to define the boundaries of the resource lay in the hands of an entity external to the group of appropriators: the Convention rather than the European broadcasters. The boundaries were, however, clear to everyone.
This reduction of the wave band treated in the Geneva plan called for a revision of that plan. The UIR called another European Conference of Wireless Engineers in Brussels the following year with the mission to revise the Geneva plan. By reducing the separation between the wavelengths from 10 to 9 kHz in parts of the band, the stations could be fitted into the smaller wave band. The number of common wavelengths was also reduced in favour of exclusive wavelengths (Heimbürger 1974). The plan came into effect in January 1929 (Tomlinson [1945] 1979). It is noteworthy that the separation of wavelengths was decreased in Brussels, reflecting technological change as well as increased demand. In effect, the resource was growing with technological performance.
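A rough illustration of the gain from the narrower spacing, ignoring the reserved maritime wavelengths and assuming for simplicity that the whole reduced band of 545–200 m (roughly 550–1500 kHz) used a single separation:

```python
# Channels that fit into the reduced Washington band at two separations;
# the narrower 9 kHz spacing recovers roughly ten channels. Simplified:
# in reality only parts of the band moved to 9 kHz.
low_khz, high_khz = 550, 1500          # ~545 m and 200 m
for sep_khz in (10, 9):
    count = (high_khz - low_khz) // sep_khz + 1
    print(sep_khz, count)              # 10 -> 96 channels, 9 -> 106
```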
5. Organisational hierarchies and extended monitoring
Things were not all settled with the Brussels plan. The UIR had, as mentioned above, no formal standing within the radio convention. In Washington, the Czechoslovakian Telegraph Administration had proposed a regional plan administered by the European post, telegraph and telephone administrations, and it invited the European PTT administrations to Prague, where yet another plan was to be agreed upon. In preparation for Prague, and by invitation, Braillard submitted a report based on the work at the Technical Centre of the UIR, which made clear that 72 of the 209 operating stations were not adhering to the Brussels plan. As Tomlinson ([1945] 1979, 181–182) points out, not all of these stations caused severe interference, but for those that did, something had to be done.
The problem with the stations of the USSR was resolved by placing them in between already existing stations, separated from those stations by 4.5 kHz. This was deemed possible since most of them were geographically located at a great distance from the other European stations. The wave separation was also set to 9 kHz in the lower part of the band, allowing for more space. However, there was also discussion of the possibility of arriving at a separation of 10 kHz again some time in the future, which might be done by reducing the number of common wavelengths (Heimbürger 1974). These speculations indicate that the separation of 9 kHz was viewed with some scepticism and might have resulted in some unwanted interference.
In Prague, the status of the UIR was discussed and agreed upon. It was decided that the UIR would function as an expert body, inviting the administrations and companies concerned to its meetings. The UIR was also to be consulted before any changes were made to the frequency plans (Heimbürger 1974; Tomlinson [1945] 1979). This clarified the status of the UIR and added to its importance. It became part of a governmental system with an expert role and continued to play a central part in making detailed frequency plans for Europe.
Frequency planning in Europe in the 1920s started with the work of the UIR on the Geneva plan and ended with the national PTTs and the Prague plan, illustrating a hierarchy of organisations involved in managing the spectrum. These institutional changes on several levels fit well with what Ostrom has called nested enterprises, identified as the eighth principle and occurring mainly within commons of greater size. The concept of nested enterprises means that institutions and rules work on different levels. The example in question includes the telegraph administrations, which ratified a radio convention; the UIR itself, which designed the institutions; and the Technical Committee and Technical Centre, which were responsible for monitoring.
The Technical Committee and the Technical Centre played a crucial role when it came to monitoring, which has been identified as the fourth design principle. Ostrom’s hypothesis is that a system performs better if those doing the monitoring are the appropriators themselves or are accountable to the appropriators. The Technical Centre was an institution established as part of the UIR, and thus accountable to the appropriators. It was seen as a necessary tool for making the transnational agreements work, and it continuously kept track of appropriator behaviour, a task dependent on accurate monitoring technology. Moreover, the appropriators monitored each other as well, by reporting whatever disturbances they encountered to the Office or the Centre and asking for action. This was a way of solving conflicts about who was to broadcast on which wavelength, and it points to the existence of a conflict-resolution mechanism, which has been identified as the sixth design principle needed for robust institutional design. In the early years of this period, the head of the Technical Centre or the Office simply wrote to the manager of a station that did not keep its wavelength stable or whose transmission power was too high.
Monitoring became a more thorough practice in 1927. According to the official history of the UIR, “it became apparent at the beginning of 1927 that daily supervision of transmitter stability was absolutely necessary for the proper functioning of the European network”. The UIR called for daily monitoring, immediate intervention, accuracy of measurements and publication of results. This last point is intriguing because published reports of frequency violations were deliberately intended to function as a form of moral leverage in the absence of international sanctions.15 The question remains whether these official reports on frequency violations can be said to be a form of sanction. In small-scale societies, shaming has been found to work well as a sanction, and the community of broadcasters might be compared to a small society, unified by ideas of engineering efficiency and precision. Andreas Fickers (2008) has argued that the Technical Committee and the Technical Centre in Brussels were “extremely successful”, since interference was heavily reduced between 1929 and 1932. Europe also showed a much higher proportion of stable transmitters than the US; unstable transmitters were a major problem in broadcasting. Judging by the effectiveness of monitoring in reducing interference, this principle can be said to have been implemented.
6. Discussion
The case of frequency allocation for broadcasting in Europe in the 1920s shows the establishment of institutions that meet the eight design principles identified as central for robust and sustainable management of common-pool resources.
Of critical interest, however, is whether and how technology mattered for these institutions and for the use of the resource.
A first distinction should be made between different technologies, as radio broadcasting is a system comprising several parts. One important component was the transmitter, in which power and stability were of the essence. The measurements carried out by the Technical Centre in Brussels show that transmitter stability increased over the time period covered in this case. Stable transmitters allowed for more predictability when it came to performance in relation to the frequency plan.
Transmission power was also important. During the period discussed here, the transmitting power of radio stations increased. The maximum power of a station was regulated in the different plans, reflecting the general evolution of radio technology. This evolution had a double effect: greater transmission power meant larger receiving zones, but at the same time a greater risk of interfering with other zones. The measurements made by the Technical Centre, however, show that interference actually decreased. This is partly explained by the increased transmitter stability discussed above, but most likely also by changes in reception over time.
When judging reception quality, the type of receiver used mattered, and over the period covered in this study, radio sets improved constantly. When listeners moved from crystal sets to vacuum tubes, signal quality was enhanced and listeners were less frequently subjected to disturbances of different kinds. (It should be mentioned here that AM transmission, the prevailing modulation technology, was vulnerable to unintentional emissions from other electric devices, such as neon signs or electric trams. Limiting these disturbances was a continuous task on the national level.)
Thus, technological performance and capabilities directly influenced the institutions put in place. And there are indications, such as the changes in channel separation, that technological change was already altering the division of the resource in the 1920s. This suggests that analysing the radio spectrum, which has so far not been a central object of study within commons research, can contribute to a deeper understanding of commons (Ostrom et al. 2002, 477). Technology clearly affects both institutions and appropriation. Yet the same could be said about most CPRs; few are devoid of technological solutions. What is so special about the electromagnetic spectrum? One unique feature could be that appropriation of the resource consists of transmission as well as reception, both dependent on technology. Comparing this to other commons, one may talk about production and harvesting of the resource as two technology-dependent activities, not unlike activities in classic commons.
However, the fact remains that the radio spectrum has some very particular features compared to the commons normally considered, like grazing land, rivers, irrigation systems or fishing waters. Even though almost any commons demands technology for its appropriation, in the case of the radio spectrum the resource is not even visible to us without radio technology. Another feature is that the resource is global in some frequency bands but local in others, and transmission quality varies not only with the time of day but also with geography and topography. Third, the resource is non-depletable. While in use, it displays commons properties, but when it stops being used, it immediately becomes pristine again.
One could argue that fishing grounds and grazing land are similar to the spectrum, in that they are all in some sense renewable resources, which involve crowding in contexts of “overuse”. When fishing or grazing stops, fish increase in number again (if not fished to extinction), and grass grows back. However, one central difference here is that use of the spectrum does not destroy anything material. It is true that the communication for which the resource is used creates externalities that effectively destroy communication. But the resource itself is not destroyed. The term renewable implies a process involving time, during which the resource gradually improves or is renewed. When use of the spectrum stops, however, no time is needed for it to become pristine again. Therefore, the term non-depletable is more appropriate to characterise this resource, even though, as apparent from the case, its use is still subject to overcrowding and diminishing returns.
The fact that the radio spectrum is non-depletable becomes even more intriguing when put together with technological change on a longer time scale than that of this particular and limited case. Moving closer to our own time, technological change has been of utmost importance for institutional design and use. Whereas technological change very often leads to the more rapid depletion of resources, this is not the case with the radio spectrum. Instead, technological change has allowed for more efficient use of more parts of the spectrum for communication purposes. The first allocations in 1912 concerned frequencies up to 1 MHz. In 1927 frequencies up to 23 MHz were allocated, in 1938 up to 200 MHz, in 1947 up to 10,500 MHz, in 1959 up to 40,000 MHz and in 1979 up to 275,000 MHz (Levin 1971). Not only has the move upwards in the spectrum meant the emergence of new services and organisations, but new transmitting and receiving technologies have also brought change to old institutions. The strict division of the spectrum, in which one band is reserved for one service, has partly dissolved, which means that wireless access can be offered in the same bands traditionally used for television.
This paper treats the electromagnetic spectrum as a resource that displays commons properties when used. The prospect is that a better understanding of this type of governance might facilitate present and future decisions on spectrum management. As indicated above, the idea that the market is the best way to ensure efficient use of the spectrum has gained momentum recently. The discussion on introducing markets into radio frequency allocation began in the 1950s, with Ronald Coase’s now classic 1959 article criticising the spectrum policy of the US Federal Communications Commission (Coase 1959). An influential study by the economist Levin (1971) brought the issue up again. However, it was not until the 1980s, and even more so the 1990s, that market solutions, in the form of auctions, were introduced (Noam 1998). This happened not only in the US but also in the European context. Well-known examples are licences for commercial radio and spectrum for mobile telephony.
The market solution was thus not standard practice during most of the 20th century. Moreover, it is still not comprehensive: it does not cover all countries, services or frequency bands. Nevertheless, this logic of demand and supply has formed the present discourse on how to use the radio spectrum, which in turn might limit our understanding of how this resource can be efficiently used.16 A better understanding of the spectrum as a commons could balance the paradigmatic market view and make for well-informed strategies for future spectrum allocation. This is even truer against the background of voices calling for a new regime built on intelligent devices able to sort information from noise (Benkler 2002), a proposal that is now challenging the market paradigm.
7. Conclusion
The institutions constructed to manage the medium-wave band used for broadcasting in Europe in the 1920s exhibit the same design principles as those identified for long-enduring common-pool resources. Ostrom’s design principles have been validated before and proven robust, but they have not been tested on a resource like the electromagnetic spectrum. Use of this resource is heavily technology dependent, and the history of this case shows the significant extent to which the state of the art of radio technology directly influenced management institutions. Even though use of the radio spectrum creates externalities that result in crowding and diminishing returns, the spectrum instantly becomes pristine again when use stops. The fact that the radio spectrum is non-depletable means that improved management produces immediate gains and effectively expands the resource.