Driverless cars have survived the road, but will they survive Congress?
With the recent crash involving an Uber-operated self-driving car in Arizona, driverless car technology has again come into question. All the rage over the past few years, this technology has had its staunch proponents and fierce critics. Tesla's "Autopilot" has been a prominent example of the technology being rolled out at scale, but as Tesla continues to develop its driver-assist systems, newer entrants such as Waymo (a Google subsidiary) and even an old giant, Ford, are right on its heels. Recently, however, a new project has emerged, led by a former legend of the hacking world, George Hotz. His project, Comma.ai, is an entirely new take on the self-driving car. Instead of pursuing an integrated hardware-and-software approach as Waymo and Tesla have done, Comma.ai focuses on developing open source software for ordinary vehicles already on the road. Unlike Tesla's projects, Comma.ai is designed to adapt to existing vehicles in production, with the end goal of a machine-learning-based software suite that lets users inexpensively turn their current car into an autonomous vehicle. Comma.ai pursues this goal by making its source code available to anyone and permitting open source development. The software takes advantage of sensors, modules, and other hardware components already present in vehicles to enhance safety and usability, and is designed to run on most smartphones along with a special attachment.
This new approach will likely have a drastic effect on the market, as it would open self-driving car development to a much wider developer base, including those with less financial capital. If this open developer platform takes off, it is likely to further accelerate adoption of self-driving technology through its impact on the passenger car market, the hired transport market, and the logistics industry. The only question now is how the market and the Federal government will react. Three major markets are likely to be affected: the traditional passenger car market, specifically private drivers; hired transport such as taxis and buses; and the logistics and trucking industry. There are roughly 1.7 million drivers in the logistics industry and another 1.7 million in the taxi and bus industry. While figures on the revenue generated by the US taxi and bus industry are murky, the American Trucking Associations found that the trucking industry's total revenue was 676 billion dollars in 2016. In the taxi industry, which employs over 143,000 licensed drivers in New York City alone according to a 2016 report by the Taxi and Limousine Commission, pressure is already high as a result of ride-sharing services such as Uber and Lyft. Likewise, widespread adoption of driverless technology, while still in the distant future, will have drastic effects on this industry's employment. Because of this, proposals in Congress are already focused on creating new regulatory regimes for this technology, including preempting state regulations (which vary from state to state), allowing vehicles that don't meet Federal safety standards to be tested, and imposing testing caps.
However, one advantage of the Comma.ai project is that it can be adopted by many more users, which would likely leave regulators with little ability to prevent the spread of driverless cars; their only choice will be to embrace it. Since the Comma.ai project is software-based and is not tied to any particular car manufacturer, users are free to test the software on potentially any compatible vehicle. For this reason, it is preferable for the Federal government to take a more hands-off approach. By attempting to regulate and control the inevitable, Congress would inadvertently make it more difficult to safely adapt this technology for widespread use.
While there are many valid safety concerns, studies and reports have demonstrated that even at their early alpha stages, self-driving vehicles are largely as safe as, if not safer than, human drivers. Some studies, such as one conducted by the RAND Corporation, even recommend adopting such technology as fast as possible, since it has the potential to drastically reduce traffic fatalities. In a predictive model comparing a vehicle 10% safer than a human driver to one 75% safer, averaged over development wait times, RAND's Kalra et al. find that "In the short term, more lives are cumulatively saved under a more permissive policy than stricter policies requiring greater safety advancements in nearly all conditions, and those savings can be significant — hundreds of thousands of lives".
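The intuition behind the RAND finding can be shown with a toy calculation. The numbers below are hypothetical assumptions of my own, not Kalra et al.'s inputs: both scenarios assume the technology matures to 75% safer by year 20, and differ only in whether 10%-safer vehicles are allowed on the road in the interim.

```python
# Toy illustration of the RAND-style argument (hypothetical numbers,
# not Kalra et al.'s actual model): compare cumulative traffic deaths
# under a permissive policy (deploy 10%-safer AVs now) versus a strict
# one (human driving until AVs are 75% safer).

BASELINE_DEATHS_PER_YEAR = 37_000  # rough order of annual US traffic deaths

def cumulative_deaths(interim_gain, horizon=30, mature_year=20, mature_gain=0.75):
    """Total deaths over `horizon` years; AVs are `interim_gain` safer than
    humans until `mature_year`, then `mature_gain` safer."""
    total = 0
    for year in range(horizon):
        gain = mature_gain if year >= mature_year else interim_gain
        total += BASELINE_DEATHS_PER_YEAR * (1 - gain)
    return total

strict = cumulative_deaths(interim_gain=0.0)       # humans drive until maturity
permissive = cumulative_deaths(interim_gain=0.10)  # 10%-safer AVs allowed now
print(f"lives saved by permissive policy: {strict - permissive:,.0f}")
```

Even with these deliberately modest assumptions, the permissive policy saves tens of thousands of lives over the thirty-year window, which is the shape of RAND's conclusion.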
Regulating an early market by instituting testing caps may only entrench less desirable systems in the current market, and may even forgo a potential safety-enhancing network effect that would come from permitting more self-driving vehicles on the road. It remains to be seen what the future holds, and more importantly how this fast-evolving technology will affect the valuation of both the passenger and commercial transport markets.
In an era of increasing globalization, what does one region’s internet regulations mean for foreign firms?
The past few years have been marked by many high-profile, disastrous data breaches in the US: the infamous Yahoo mail breach of 2014, which recently resulted in a $35 million fine imposed by the SEC; Equifax's major 2017 breach, which leaked the personal data of over a hundred million consumers; and Facebook's recent Cambridge Analytica misfortune. Breaches like these have made data protection and privacy pressing concerns as our lives have become further digitized. What companies and governments should be able to do with data has become a major area of debate, and it remains unclear in which direction citizens, corporations, and policymakers want to move. Mark Zuckerberg's infamous Senate testimony at the US Capitol showed just how unprepared both companies and the Federal government are on this issue.
In Europe, things are taking a completely different turn. A widely watched piece of legislation for the technology sector is slowly being implemented. The GDPR, or General Data Protection Regulation, was approved by the European Parliament in 2016 and is set to harmonize data protection law across the whole of the EU. Unlike an EU directive, which requires implementing legislation in each member state, a regulation takes effect at the union level. The aim of the GDPR is to delineate exactly how user data, specifically in the consumer context, is to be treated by firms. The GDPR grants many "digital rights", including: the right of access to data a company has collected, consent requirements, notification of data breaches, data portability (similar to the HIPAA healthcare law in the US), a protocol for data protection, and finally the controversial "right to be forgotten". This is not an exhaustive list, as the GDPR completely reshapes data protection law in the EU.
This raises the question of how the regulation will apply to US firms operating in the EU. By far the most significant effect of the GDPR on US companies is that the law applies to all companies doing business in the EU: companies like Facebook and Google must fulfill their obligations under the GDPR in order to operate there. During the Congressional hearing on the Cambridge Analytica breach, Zuckerberg remained unclear as to whether Facebook would implement GDPR policies only in its EU operations, or extend the protections to US citizens as well. It remains to be seen whether companies will find ways to fight the implementation, considering the increasing costs of compliance and of hiring data protection officers, who would be responsible for implementing much of the process in house.
There are many reasons for companies to be wary of new regulation like the GDPR. For one, costs are enormous for both compliance and non-compliance. At the upper tier, fines can reach 20 million Euros or 4% of a firm's annual worldwide revenue, whichever is greater. While implementation costs will vary from firm to firm, companies are expecting to spend millions on developing the processes and departments needed to meet the compliance deadlines. In certain industries these regulatory changes have met strong criticism, particularly in the online video game sector, where certain server and software arrangements remain incompatible with implementation. A popular multiplayer online battle arena (MOBA) game, "Super Monday Night Combat", recently cited GDPR compliance as the primary reason for shutting down its servers.
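The structure of the upper-tier fine can be made concrete with a short sketch (the turnover figures below are hypothetical examples, not any firm's actual revenue): the cap is whichever is greater of the flat 20 million Euro figure and 4% of worldwide annual turnover.

```python
# Sketch of the GDPR upper-tier fine cap: the greater of EUR 20 million
# or 4% of a firm's annual worldwide turnover.
# The turnover figures used below are hypothetical examples.

def gdpr_max_fine(annual_turnover_eur):
    """Upper bound of a top-tier GDPR administrative fine, in euros."""
    return max(20_000_000, 0.04 * annual_turnover_eur)

# A mid-sized firm is bounded by the flat EUR 20M term...
print(gdpr_max_fine(100_000_000))
# ...while for a large platform the 4%-of-turnover term dominates.
print(gdpr_max_fine(40_000_000_000))
```

For a hypothetical platform with 40 billion Euros in turnover, the cap is 1.6 billion Euros, which is why the largest technology firms take the regulation so seriously.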
For the US market in general, many firms have responded by saying that they will either reduce their presence in Europe (32%) or leave entirely (26%), according to a survey conducted by PwC in 2016. Because of this, it is important for policymakers to remain careful in how they approach this sensitive issue. In the US, similar plans to implement a national "internet" tax on sales by companies such as Amazon stirred much controversy about the future of an internet open and free for commerce. By making abrupt changes to the way internet companies are taxed or regulated, policymakers risk fueling further uncertainty in an age when cybersecurity fears have already jostled the market. The internet has always been a marketplace of innovation, free for commerce, with a low barrier to entry and ease of use. We can only hope it remains that way.
In the United States, college is expensive. Every year, students across the nation trek to their respective state capitals to lobby for increased financial aid, among them NYU students who have been knocking on doors in Albany. New York State has largely lived up to its promise of increasing assistance, and in recent years has allocated funds to make college much more affordable. Just last year, Governor Cuomo unveiled the Tuition-Free Degree Program, under which eligible families can attend a New York State college tuition-free. While the governor's plan comes with many stipulations, such as income, residency, and credit requirements, it has still been hailed as a small but important step towards solving the student debt crisis. But is it a pragmatic solution?
During the 2016 Presidential race, several candidates identified student loans and debt as a critical issue, and for many millennials it has become one of the most important items on their policy agenda. US student debt balances now stand at around 1.3 trillion dollars, roughly the size of the US junk bond market, and the default rate on these loans is ever increasing.
Although Governor Cuomo's tuition relief program has unsurprisingly been received with great cheer among his constituents, some voice concerns about throwing money at the problem as a permanent solution. While the goal on both sides of the political aisle is to help students obtain higher education at reduced cost, a flurry of bills has been proposed at both the Federal and State levels seeking to address the issue in alternative ways. The cost of higher education has been spiraling out of control over the past decades, with no end in sight, and debt-riddled students have been left floundering in the face of ever-increasing interest rates on student loans.
The long-prevailing view has been that both Federal and State governments have done too little to reduce the cost of attending college or to help students pay off debt. Numerous proposals have been discussed, such as ceilings on interest rates or partial forgiveness of student loans, and some go as far as suggesting free public education for all students. With the cost of college attendance rising at universities all across the US, it remains to be seen what new legislation will be proposed, if any action is taken at all by the Federal government. One thing is certain: drastic changes must occur, or the default rate may continue to skyrocket as students accumulate more debt. According to the National Center for Education Statistics, since it began collecting data in the mid-1980s, college tuition has risen steadily past the rate of inflation in all categories of institutions, regardless of program length. For reference, across all institutions the inflation-adjusted average tuition was $10,210 in 1984, compared to $21,728 in 2014. There is no indication that the average tuition rate will slow down or decline.
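The NCES figures quoted above can be turned into an annual growth rate directly: going from $10,210 to $21,728 over the thirty years from 1984 to 2014 works out to about 2.5% per year in real terms.

```python
# Implied real (inflation-adjusted) annual growth rate of average tuition,
# computed from the NCES figures quoted in the text.

tuition_1984 = 10_210   # average tuition, 1984 (inflation-adjusted dollars)
tuition_2014 = 21_728   # average tuition, 2014
years = 2014 - 1984

cagr = (tuition_2014 / tuition_1984) ** (1 / years) - 1
print(f"roughly {cagr:.1%} per year above inflation")
```

Compounding at that rate for another thirty years would more than double average tuition again, which is the trajectory the text describes.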
Currently, nationwide student debt is estimated at roughly over 1.3 trillion dollars according to the New York Federal Reserve's fourth quarter 2016 report, a 31 billion dollar increase over the previous quarter. According to The Student Loan Report, the average 2016 graduate in the US owed $17,126 in student loans. Generally, these numbers correlate with the percentage of bachelor's degree holders in each state: the more people who have already gone to college, the more future graduates will likely pay to attend a degree-granting institution.
However, these numbers don't give us a clear picture, as they also include those who did not take out loans. We also need to look at the average debt per borrower across the US, which is $35,051 according to an analysis of National Center for Education Statistics (NCES) data by the Wall Street Journal's Mark Kantrowitz. This shows that the situation is much more dangerous than it first seems. Obviously, there is nothing inherently wrong with debt and loans, as long as they are paid back. Unfortunately, this has not been the case. According to Kantrowitz, this debt is continuing to rise, along with ever-increasing default rates. According to a study by Judith Scott-Clayton at the Brookings Institution, the default rate for students who took out Federal loans in 2004 is projected to reach 40% by 2024, driven by college graduates finding it difficult to obtain jobs that can adequately service their debt.
These numbers do not even consider those who have gone on to attend graduate school programs, where high costs can also require loans, although many of these programs offer significant discounts. Student debt can be a serious problem, as it can dramatically limit the options graduates have in choosing a career, purchasing a car or home, as well as many other economic activities. Surveys indicate that student loans have a ripple effect on the market and will likely lead to dramatic changes in sectors such as real estate, where traditionally purchasing a home was a commonplace middle-class goal.
There is, however, mounting evidence that student loans and Federal assistance themselves may be the main contributing factor to rising student debt. Daniel Lin, professor of Economics at American University, argues that the issue comes down to simple supply and demand. As more students go to college, colleges must decide between allowing increased enrollment and increasing tuition; it comes as no surprise that the latter is what occurs. A major effect of Federal loans and assistance is increased college attendance and increased demand for education overall, leading to higher tuition costs. A 2015 report published by the Federal Reserve Bank of New York concluded that the three primary Federal student assistance programs (Federal Direct Subsidized Loans, Federal Direct Unsubsidized Loans, and Pell Grants) all led to an oversaturation of college students in the market and increased attendance costs. This was further supported by a 2016 National Bureau of Economic Research study, which found that Federal student aid programs contributed to a 78% increase in the cost of education at participating schools.
Ending these programs would reduce college attendance, as more people would find it difficult to obtain a loan for college, and this in turn may lead to lower tuition at universities. This is by no means a proposal to be taken lightly. Most evidence suggests that college graduates on average have higher incomes than those with only a high school diploma; a 2014 Pew Research Center study found that college graduates make, on average, $17,500 more per year than high school graduates. And that is without mentioning the ethical ramifications of denying a college education to those without the financial means to attend.
This does not give us the whole story, however. The type and quality of education also make a significant difference. We must compare factors such as majors: someone who studies, say, economics is more likely to earn more over his or her lifetime than someone who studies certain other subjects. Since the Federal government's loan programs do not consider which major the student chooses, loans are not provided in accordance with the market's demand for different majors.
Some majors may have a higher income potential but also a higher unemployment rate, and vice versa. Another important factor to consider is the average debt-to-income ratio for each major, which also varies drastically. An analysis by Credible looked into debt-to-income ratios by major and found rates varying from 6% for economics majors up to 15% for veterinary science majors.
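The ratio Credible reports can be read as the share of monthly gross income going to student-loan payments, and a small sketch shows why the same debt load weighs so differently across majors. The payment and salary figures below are hypothetical examples of mine, not Credible's data.

```python
# Debt-to-income ratio in the sense of the Credible analysis cited above:
# the share of monthly gross income going to student-loan payments.
# The payment and income figures below are hypothetical examples.

def debt_to_income(monthly_payment, monthly_income):
    return monthly_payment / monthly_income

# The same $300/month payment weighs very differently by starting salary:
econ = debt_to_income(300, 5_000)  # economics-major salary (hypothetical)
vet = debt_to_income(300, 2_000)   # veterinary-science salary (hypothetical)
print(f"{econ:.0%} vs {vet:.0%}")
```

In other words, the spread from 6% to 15% can arise entirely from differences in starting salary, even when borrowers carry identical debt.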
Another interesting question is whether a college education is even necessary at all. George Mason University economist Bryan Caplan has argued that college and public high school education serve the public very little. Caplan argues that this education amounts to little more than what he calls "signaling": graduation indicates that a candidate is worthy of employment, but the education itself does not necessarily produce skillful candidates for the market. For Caplan, higher education is a form of hoop-jumping in which prospective candidates show off their willingness to conform to societal expectations and work hard, among other things. In his recent book The Case Against Education, Caplan argues for abandoning the publicly funded higher education system altogether in favor of a more liberal approach to education. Looking at data on the occupations students take after public education, as well as data on skill development, he concludes that publicly funded education provides very little actual value for labor in the market. Caplan cites diverse sources of data for his argument; especially interesting is the data on how little of the material taught in educational institutions is retained over time. Caplan does not argue that higher education fails to lead to higher salaries, but rather that it does not automatically lead to more tangible work skills. What such a dramatic policy change would do is quite unclear, and it may be that current institutions are so ingrained in the market that their removal or reduction would lead to many sudden negative consequences. In the end, there is likely no right answer, but it may be time to reevaluate current Federal policy.
Recently, the European Union has begun the rollout of MIFID II (Markets in Financial Instruments Directive II), its new regulatory package for the financial industry of its member states. The package is a follow-up to the original MIFID I, which began the harmonization of financial regulation across the European Economic Area back in 2004. The new package is intended to promote investor confidence and increased transparency in the European financial market, with aspects borrowed from the landmark US Dodd-Frank package of 2010. As we will explore, much of this regulation is aimed at what are known as dark pools: private trading forums that are not present on traditional public exchanges such as the New York Stock Exchange or NASDAQ.
First, to put things into perspective, MIFID II is a general regulatory package that impacts all financial services firms in the European Union. According to Bloomberg analyst Dick Schumacher, MIFID II will encompass a wide range of changes in how financial regulation is conducted in the EU. Schumacher writes: "among the changes that MIFID will make over the following years will include, the publication of prices and trades, limits on trading in private exchanges, as well as requiring brokers to secure best prices" (a sort of extension of the fiduciary rule), just to name a few.
These dramatic changes are almost certain to make a big impact on the daily functioning of European financial markets. The question then arises as to whether the new regulation serves the general good. Proponents, along with ESMA (the European Securities and Markets Authority), claim that it is crucial to ensuring transparency and stability in the European financial market, and the legislation is clearly targeted in particular at dark pool trading. Dark pools, private trading forums for financial instruments where the price and size of orders are not revealed, have been the subject of much heated debate over whether they pose a threat to the health of the global financial market at large.
Dark pools are used by investors who want to fill large block orders (large security purchases) and can arrange a much more favorable price there than they would otherwise receive on the open market. Large orders filled on the open market are more subject to price swings, which may negatively impact the price of the securities. To give an analogy: let's say you went to the flea market to purchase furniture and came across a seller offering a chair. If you purchase the chair in front of other buyers, they will expect to receive similar prices when they purchase the same product from the seller. However, if you quietly negotiate with the seller in the corner and settle on a different price, the other buyers may never know that the seller sold you the chair for much less than market value. You receive a more favorable price, and the seller obtains a guaranteed sale while continuing to sell to other buyers at the higher market price.

Critics of dark pool trading claim that dark pools pose a serious hazard to investors, as there is an increased possibility of predatory trading practices by high-frequency trading firms. According to Michael Lewis, author of the bestseller Flash Boys: A Wall Street Revolt, dark pools allow for a wide variety of predatory trading tactics, such as pinging. Investopedia's Elvis Picardo explains pinging well. Picardo writes: "A high-frequency trading firm puts out small orders so as to detect large hidden orders in dark pools. Once such an order is detected, the firm will front-run it, making profits at the expense of the pool participant. Here's an example: a high-frequency trading firm places bids and offers in small lots (like 100 shares) for a large number of listed stocks; if an order for stock XYZ gets executed (i.e., someone buys it in the dark pool), this alerts the high-frequency trading firm to the presence of a potentially large institutional order for stock XYZ.
The high-frequency trading firm would then scoop up all available shares of XYZ in the market, hoping to sell them back to the institution that is a buyer of these shares”.
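Picardo's description boils down to a simple probe-and-detect loop, which can be sketched in a few lines. This is a deliberately crude toy model of a dark pool, not any real venue's behavior or API; real detection strategies are far more involved.

```python
# Simplified sketch of the "pinging" tactic Picardo describes: a firm
# sends small probe orders across many symbols; a fill reveals a large
# hidden order resting in the dark pool. The pool here is a toy model.

def probe_fills(hidden_buy_interest, symbol, probe_size=100):
    """Return True if a small probe sell order would fill, i.e. a hidden
    buyer with at least `probe_size` shares of interest exists."""
    return hidden_buy_interest.get(symbol, 0) >= probe_size

hidden_buy_interest = {"XYZ": 50_000}  # institution quietly buying XYZ

detected = [s for s in ["ABC", "XYZ", "QRS"]
            if probe_fills(hidden_buy_interest, s)]
# Having detected the buyer, the firm would buy XYZ on lit markets and
# resell to the institution at a worse price (the front-run).
print(detected)
```

The key point is the information asymmetry: each probe costs the firm only 100 shares of risk, while a fill reveals the presence of an order hundreds of times larger.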
Defenders of dark pool trading highlight that dark pools are subject to the same strict regulatory scrutiny as standard exchanges, and that they may be a preferable choice for more sophisticated brokers managing increasingly diverse portfolios. Additionally, long-standing criticisms have come from investors, and even the UK's Treasury, who claim such regulation has a negative impact on bond liquidity, though this concern has yet to be explored more thoroughly. Dark pool trading has substantially increased over the past decade, especially in Europe, according to data from several brokerages, with its share of European equity markets hovering near 10% since 2010.
The MIFID II regulations will undoubtedly impact dark pool trading. However, it remains to be seen whether investors will shift trading to open exchanges, as the regulations intend, or whether new infrastructure will evolve in the market to circumvent the barriers MIFID II erects. Workarounds are already underway, including the use of what firms call systematic internalisers (SIs), in which firms fill client orders against their own book rather than against other firms, essentially running their own private exchanges, a method currently exempt from most of MIFID II's regulations. Alex Gerko of XTX Markets Inc. told Bloomberg that MIFID II will only push firms further toward other forms of dark pool trading such as SIs, forecasting, according to Bloomberg's Will Hadfield, a dark pool share that could rise to as much as 30% of the market. If so, MIFID II will be only a temporary roadblock for dark pool traders, and nothing more than another project for compliance departments.
The FCC is deciding whether or not to save the internet with regulations, but does the internet really need saving?
Over the past few years, various activists and companies have pushed the FCC to adopt net neutrality regulation. The FCC, or Federal Communications Commission, is the US administrative agency responsible for regulating telecommunications; established in 1934, it regulates radio, telephone, television, and other communications. The principle the activists are pursuing, net neutrality, holds that all internet traffic should be treated equally by ISPs (Internet Service Providers). Months ago there was much discussion regarding the FCC and net neutrality, which has only recently died down, but the debate is likely to be renewed as a vote is expected to take place next month.
The debate goes something like this. Net neutrality advocates in the US argue that the FCC should regulate ISPs in order to ensure the fair flow of traffic; unfair treatment of traffic, say by an ISP such as Comcast, could negatively affect the openness of the internet. Critics of net neutrality regulation, such as current FCC chair Ajit Pai, argue that imposing such regulations would hamper economic activity on the internet. The primary issue is whether ISPs should be regulated under Title II of the 1934 Communications Act as "common carriers". What does it mean to be regulated as a common carrier?
Under the law, when a telecommunications company is classified as a common carrier, it must abide by a whole host of anti-monopoly rules, including regulations on price and content. Proponents of net neutrality laws claim that companies such as AT&T and Verizon, among many others, have been violating the rights of consumers by favoring certain traffic over other traffic, and thus call for more stringent regulation. However, the FCC under the current administration is reluctant to enforce such measures. How valid are these concerns? And more importantly, what would be the consequences of enacting strict Title II regulations?
First, to put things in perspective, the internet is arguably the most unregulated institution in existence today. No single government or company centrally controls it, and until 2015 there was no comprehensive net neutrality regulation in the United States. Proponents of regulation argue that companies have taken advantage of this and treated network traffic unfairly. In one famous instance in 2007, Comcast was cited for "throttling" (slowing down) traffic; in another, a company was found to have blocked Vonage VOIP phone calls. Instances like these have led proponents to argue for increased FCC supervision of the internet. But while such instances are examples of companies acting unfairly, they are not the norm and for the most part represent outliers.
Arguably, a greater threat to net neutrality is government itself. The most prominent and notorious example is China's extensive surveillance and censorship of the internet, along with large state ownership of the country's two primary ISPs, China Telecom and China Unicom, which serve an incredible 20% of the world's internet users.
While China may be considered far more restrictive than most countries, European states have also acted in similar ways. The UK government has implemented age-restricted content rules and issued court orders against numerous sites publishing "wrongful" content, acting through the Internet Watch Foundation, which is classified as a charity. Spain has recently been reported to have blocked pro-Catalan-independence web addresses. These violations of internet freedom have been well documented by the NGO Freedom House.
Examples of such restrictions are far more commonplace than web traffic discrimination by private ISPs for monetary gain. The primary reason ISPs generally do not act this way is that it would not be in their interest: ISPs advertise their service in terms of price and speed, unlike bundled television packages, and they choose not to play favorites because users demand a service that does not. It is possible that down the road they may do so, but that would not necessarily be wrong. Allowing internet companies to experiment with packages and favor certain forms of traffic is not in and of itself wrongful, and it should not be construed as such.
In some ways, remaining less regulated may allow companies to provide a better service. If customers are willing to pay a lower price to have their internet favor certain services such as Netflix or Amazon, there is no harm in an ISP choosing to offer a package that caters to such a market.
Additionally, ISPs need the flexibility to manage traffic to compensate for increasing usage of the same infrastructure. In the US alone, surveys conducted by the Pew Research Center show that the share of individuals with internet access rose from 52% in 2000 to a whopping 88% in 2016.
In models constructed by Peitz et al. at the University of Mannheim, strict net neutrality laws were predicted to lead to severe web traffic inefficiencies. Using game theory, Peitz's models showed that a strict net neutrality regime may force ISPs to employ more inefficient traffic management methods; Peitz postulates that, according to his proposition, "net neutrality generates an inflation of traffic, leading to excessive congestion of the network".
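The intuition behind the congestion claim can be illustrated with a deliberately crude scheduling example. This is my own toy model, not Peitz's game-theoretic one, and the packet classes and sizes are hypothetical: under a strict "every packet is treated identically" rule, latency-sensitive traffic waits behind bulk transfers.

```python
# Toy scheduling example of the intuition behind the congestion claim
# (an illustration only, not Peitz's actual model): under a strict
# "every packet is equal" rule, latency-sensitive traffic queues
# behind bulk transfers. Packet classes and sizes are hypothetical.

packets = [("bulk", 100), ("video", 1), ("bulk", 100), ("video", 1)]  # (class, ms)

def avg_video_delay(queue):
    """Average completion time (ms) of 'video' packets served in order."""
    clock, delays = 0, []
    for cls, ms in queue:
        clock += ms
        if cls == "video":
            delays.append(clock)
    return sum(delays) / len(delays)

neutral = avg_video_delay(packets)  # FIFO: treat all traffic identically
managed = avg_video_delay(sorted(packets, key=lambda p: p[1]))  # small first
print(neutral, managed)
```

In this sketch the tiny video packets wait over 150 milliseconds on average under neutral FIFO service, versus under 2 milliseconds when the ISP is allowed to serve small latency-sensitive packets first, while the bulk transfers finish at essentially the same time either way.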
The possibility of unintended consequences goes even further. If Title II regulations were applied to ISPs in their entirety, free or "freemium" ISPs might be outright prohibited from the market. By their very nature, free ISPs such as Google Fiber's project in Kansas favor certain traffic, such as traffic that goes to Google's servers. Would it be right to consider this a discriminatory or monopolistic tactic? A telling precedent is Microsoft, which provided Internet Explorer free with its Windows operating system; the D.C. Circuit treated Microsoft's bundling as monopolistic conduct in United States v. Microsoft, 253 F.3d 34 (2001). Title II regulations likewise prohibit monopolistic tactics and mandate a whole host of substantial requirements. The full list of Title II regulations that apply to common carriers is too expansive to cover in this article, so I have listed only a few examples.
The FCC lists on its website a whole host of rules that apply to common carriers. Here are just a few examples:
Under section 64.2325, the FCC is instructed to enforce regulations governing reasonable rates, terms, and conditions. So far, these rules have applied only to telephone companies, but what would constitute a reasonable rate for an ISP to charge? What would count as a reasonable term or condition?
Under section 64.2007, the FCC requires approval for using customer proprietary network information. Telephone companies have for years had to keep records of customer information, and are only permitted to furnish this information after receiving customer approval. What record-keeping and disclosure requirements would ISPs face for their customers’ information?
Under Part 61 of the subchapter regarding common carriers, there would be new rules governing the filing of tariffs (published rate schedules), and under Part 59, rules governing how infrastructure sharing must occur. These rules could be disastrous if applied to smaller ISPs, which would struggle to navigate tariff filings or to build out their infrastructure in order to compete with larger ISPs.
It may be argued that the FCC would be unlikely to enforce such provisions; however, courts have historically granted administrative agencies broad deference under wide delegations of power. In Mistretta v. United States (1989), the Supreme Court ruled that so long as Congress has given an agency or commission an “intelligible principle”, the agency is free to choose among a wide variety of means to achieve that objective.
Another example of such broad delegation is the landmark Chevron U.S.A. v. Natural Resources Defense Council (1984) decision, in which the Supreme Court ruled that an administrative agency such as the EPA (Environmental Protection Agency) may carry out its own interpretation of a statute so long as Congress left the statute ambiguous and the interpretation is reasonable.
With the sections of regulation listed above, the ambiguity in the law is quite clear, and it is difficult to predict what direction the FCC would take in applying such rules. The fact is that the Communications Act was written in the 1930s in broad terminology, and with a Supreme Court ever more open to administrative deference, the FCC is free to implement any number of rules that apply to common carriers. Ultimately, under Title II the FCC has a wide range of authority.
While acknowledging the pressing concerns that net neutrality advocates have about protecting a free and open internet, the adoption of Title II regulations and the labeling of ISPs as common carriers may bring drastic consequences down the road. In a field with limited research, it may be preferable to avoid major changes to a largely stable and free US ISP market. Down the road, the FCC may address serious net neutrality violations by carriers with new, specifically targeted legislation, but in the meantime the adoption of common carrier status for ISPs may create more holes than it fills.
You see them everywhere: the sad rusted carcasses of bicycles with wheels, handlebars, and all sorts of parts missing. Ever wonder why this is so common? I certainly have.
When we analyze bicycle theft throughout the United States, we run into a major problem: statistics are very shoddy. In 2006, roughly 250,000 bicycle thefts were reported to police; however, the National Crime Victimization Survey (NCVS) estimates that during the same period, 1.3 million bicycles were actually stolen (United States Department of Justice, 2006). This problem has been growing over the past decade, as the rate of bicycle theft increased by roughly 4.2% from 2010 to 2011, per the last available FBI larceny report (Federal Bureau of Investigation, 2010-2011). Even as nationwide violent crime rates drop, bicycle theft is a growing trend. This is a major problem considering the increasing popularity of bicycle usage. According to a survey conducted in Montreal, people who do not regularly ride bicycles cited theft as the primary reason they do not ride (van Lierop, 2015).
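The gap between reported and estimated thefts is worth making explicit. A quick back-of-the-envelope computation with the figures above (Python used purely as a calculator):

```python
# Rough arithmetic on the 2006 figures cited above.
reported = 250_000      # bicycle thefts reported to police
estimated = 1_300_000   # NCVS estimate of actual thefts

reporting_rate = reported / estimated
unreported = estimated - reported

print(f"reporting rate:    {reporting_rate:.1%}")  # about 19.2%
print(f"unreported thefts: {unreported:,}")        # 1,050,000
```

In other words, barely one theft in five ever reaches official statistics, which is exactly why the numbers are so shoddy.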
The bicycles targeted by thieves are typically those the average person would use: bicycles costing $500 or less constitute 76% of all bicycles stolen (van Lierop, 2015).
Given this, people have proposed a variety of solutions. One of the most popular is increasing police presence and surveillance, as well as the sentences for this category of theft. Economists Giovanni Mastrobuoni and David Rivers postulate that criminals respond to longer prison sentences as they would to any increase in the opportunity cost of crime (Mastrobuoni & Rivers, 2016). This, however, is very debatable, with other studies and reports showing a weak correlation, or none at all, between longer sentences and the prevalence of a crime. A study from the National Research Council examined the lengthy history of crime in the US, starting from when statistics were first gathered in the late 1920s; it was unable to conclude that higher rates of incarceration and increased sentences reduced crime (Travis, 2014). This also raises concerns about recidivism, or committing a crime again after prison, which has its own unique effects. The principle of opportunity cost applies to recidivism as well; as Nobel prize-winning economist Gary Becker writes, “Serving time in jail may reduce legal opportunities so that the opportunity cost for future criminal activity is lower” (Becker, 1968).
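Becker’s rational-offender calculus can be sketched in a few lines: a crime is worthwhile only if its expected gain exceeds the probability of being caught times the cost of the sanction. The numbers below are hypothetical, chosen only to show how the deterrence threshold moves:

```python
# Minimal sketch of Becker's (1968) expected-cost model of crime.
# A rational offender commits the crime when the gain exceeds the
# expected punishment cost, i.e. gain > p_caught * sanction_cost.

def commits_crime(gain, p_caught, sanction_cost):
    """True if the expected gain exceeds the expected punishment cost."""
    return gain > p_caught * sanction_cost

GAIN = 200.0      # hypothetical resale value of a stolen bicycle
P_CAUGHT = 0.05   # hypothetical (low) probability of being caught

# Doubling the sanction deters only at the margin...
print(commits_crime(GAIN, P_CAUGHT, 2000.0))  # True  (200 > 100)
print(commits_crime(GAIN, P_CAUGHT, 4000.0))  # False (200 is not > 200)

# ...while raising the probability of detection has the same leverage:
print(commits_crime(GAIN, 0.15, 2000.0))      # False (200 is not > 300)
```

The sketch also hints at why clearance rates matter: when the probability of detection is tiny, the sanction must grow very large before the expected cost overtakes the gain, which is one reading of the weak deterrence results the studies above report.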
The US also has the largest prison population in the world, with 1.53 million prisoners in state and federal prisons (United States Bureau of Justice Statistics, 2016); adding to this already burdened system may not be a very smart choice. On top of that, there are growing concerns about how fairly sentences are served, particularly when we factor in minority issues (Board, 2016). Another question is whether criminals are even rational in the first place, a serious question considering the randomness of which bicycle components get stolen. To better understand this, one would need to analyze the availability and price of such components on the black market, to determine whether there is a rational economic incentive to steal a bicycle bell or tire.
Now the circumstances may seem grim, but there is an alternative: private sector innovation, which is opening new ways to secure bicycles. One of the more obvious approaches is increasing the strength of locks. Companies like Kryptonite and Abus have designed newer and more reinforced locks capable of withstanding serious attacks. The new Kryptonite Fahgettaboudit touts a “Hardened MAX-PERFORMANCE Steel shackle” (Kryptonite, 2017). That doesn’t sound like some simple cable that could be cut with your average wire clippers. Another way the private sector has helped prevent bicycle larceny is by focusing on the areas where bicycles are stored. Many businesses encourage their employees and customers to bring their bikes inside, and use security cameras to reduce bicycle theft. Other more advanced solutions include GPS tracking devices like Sherlock, which works much like the Find My iPhone application (Sherlock, 2017). Each solution has its drawbacks: modern power tools can defeat even high-end locks, indoor bicycle storage consumes valuable real estate as rents rise, and GPS trackers are expensive and open up a myriad of privacy concerns. Whatever the approach, avoiding imprisonment and criminalization is always preferable.
On another front, bicycles are a major industry with 6.2 billion dollars in direct sales alone in the US, according to the National Bicycle Dealers Association 2015 industry overview (National Bicycle Dealers Association, 2015). New environmental and health concerns have made many Americans choose bicycles as an alternative form of transportation, with numerous cities adopting bike share and bike lane programs.
In New York City alone, daily ridership increased from roughly 170,000 rides in 2005 to 450,000 as of 2017 (Hu, 2017).
The future of the transportation market may be completely changed by a new solution to the problem of theft. The most interesting trend to watch, though, is not in transportation but in how market forces react to criminal activity, and whether they are capable of addressing systemic property violations.