Category: Insights

Machine Learning to Help Optimize Traffic and Reduce Pollution

Applying artificial intelligence to self-driving cars to smooth traffic, reduce fuel consumption, and improve air quality predictions may sound like the stuff of science fiction, but researchers at the Department of Energy’s Lawrence Berkeley National Laboratory (Berkeley Lab) have launched two research projects to do just that.

In collaboration with UC Berkeley, Berkeley Lab scientists are using deep reinforcement learning, a computational tool for training controllers, to make transportation more sustainable. One project uses deep reinforcement learning to train autonomous vehicles to drive in ways to simultaneously improve traffic flow and reduce energy consumption. A second uses deep learning algorithms to analyze satellite images combined with traffic information from cell phones and data already being collected by environmental sensors to improve air quality predictions.

“Thirty percent of energy use in the U.S. is to transport people and goods, and this energy consumption contributes to air pollution, including approximately half of all nitrogen oxide emissions – a precursor to particulate matter and ozone – and black carbon (soot) emissions,” said Tom Kirchstetter, director of Berkeley Lab’s Energy Analysis and Environmental Impacts Division, an adjunct professor at UC Berkeley, and a member of the research team.

“Applying machine learning technologies to transportation and the environment is a new frontier that could pay significant dividends – for energy as well as for human health.”

Traffic smoothing with Flow

The traffic-smoothing project, dubbed CIRCLES, or Congestion Impact Reduction via CAV-in-the-loop Lagrangian Energy Smoothing, is led by Berkeley Lab researcher Alexandre Bayen, who is also a professor of electrical engineering and computer science at UC Berkeley and director of UC Berkeley’s Institute of Transportation Studies. CIRCLES is based on a software framework called Flow, developed by Bayen’s team of students and post-doctoral researchers.

Flow is a first-of-its-kind software framework allowing researchers to discover and benchmark schemes for optimizing traffic. Using a state-of-the-art open-source microsimulator, Flow can simulate hundreds of thousands of vehicles – some driven by humans, others autonomous – driving in custom traffic scenarios.

“The potential for cities is enormous,” said Bayen. “Experiments have shown that the energy savings with just a small percentage of vehicles on the road being autonomous can be huge. And we can improve it even further with our algorithms.”

Flow was launched in 2017 and released to the public in September, and the benchmarks are being released this month. With funding from the Laboratory Directed Research and Development program, Bayen and his team will use Flow to design, test, and deploy the first connected and autonomous vehicle (CAV)-enabled system to actively reduce stop-and-go phantom traffic jams on freeways.

How reinforcement learning can reduce congestion

Some of the current research into using autonomous vehicles to smooth traffic was inspired by a simple experiment done by Japanese researchers 10 years ago in which about 20 human drivers were instructed to drive in a ring at 20 mph. At first everyone proceeded smoothly, but within 30 seconds traffic waves started and cars came to a standstill.

“You have stop-and-go oscillation within less than a minute,” Bayen said. “This experiment led to hundreds if not thousands of research papers to try to explain what is happening.”

A team of researchers led by Dan Work of Vanderbilt University repeated the same experiment last year but made one change: they added a single autonomous vehicle to the ring. As soon as the automation was turned on, the oscillations were immediately smoothed out.

Why? “The automation essentially understands to not accelerate and catch up with the previous person – which would amplify the instability – but rather to behave as a flow pacifier, essentially smoothing down by restraining traffic so that it doesn’t amplify the instability,” Bayen said.

Deep reinforcement learning has been used to train computers to play chess and to teach a robot how to run an obstacle course. It trains by “taking observations of the system, and then iteratively trying out a bunch of actions, seeing if they’re good or bad, and then picking out which actions it should prioritize,” said Eugene Vinitsky, a graduate student working with Bayen and one of Flow’s developers.

In the case of traffic, Flow trains vehicles to check what the cars directly in front of and behind them are doing. “It tries out different things – it can accelerate, decelerate, or change lanes, for example,” Vinitsky explained. “You give it a reward signal, like, was traffic stopped or flowing smoothly, and it tries to correlate what it was doing to the state of the traffic.”
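
To make that training loop concrete, here is a deliberately tiny sketch of the idea Vinitsky describes: a single controlled car observes its gap and speed, tries accelerating, holding, or decelerating, and receives a reward that favors steady flow over stop-and-go. The toy environment, reward function, and tabular Q-learning update are illustrative stand-ins, not the Flow framework or its deep reinforcement learning controllers.

```python
# Toy illustration of reward-driven training for a car-following controller.
# NOT the Flow API: the environment, reward, and Q-learning update are
# simplified stand-ins for deep reinforcement learning on a microsimulator.
import random
from collections import defaultdict

ACTIONS = [-1.0, 0.0, 1.0]   # decelerate, hold, accelerate (m/s^2)
DT = 1.0                     # simulation step (s)

def step(gap, speed, lead_speed, accel):
    """Advance a toy follower-car model by one step and return (gap, speed, reward)."""
    speed = max(0.0, speed + accel * DT)
    gap = max(0.0, gap + (lead_speed - speed) * DT)
    stopped = speed < 0.5 or gap < 1.0                    # stop-and-go or tailgating
    reward = -1.0 if stopped else speed - abs(accel)      # reward smooth, steady flow
    return gap, speed, reward

def discretize(gap, speed):
    return (min(int(gap), 30), min(int(speed), 20))

q = defaultdict(float)                  # Q-values for (state, action) pairs
alpha, gamma, eps = 0.1, 0.95, 0.1      # learning rate, discount, exploration rate

for episode in range(2000):
    gap, speed, lead_speed = 20.0, 5.0, 5.0
    for _ in range(200):
        state = discretize(gap, speed)
        if random.random() < eps:                                     # explore
            action = random.randrange(len(ACTIONS))
        else:                                                         # exploit
            action = max(range(len(ACTIONS)), key=lambda a: q[(state, a)])
        lead_speed = max(0.0, lead_speed + random.uniform(-0.5, 0.5))  # noisy leader
        gap, speed, reward = step(gap, speed, lead_speed, ACTIONS[action])
        nxt = discretize(gap, speed)
        best_next = max(q[(nxt, a)] for a in range(len(ACTIONS)))
        q[(state, action)] += alpha * (reward + gamma * best_next - q[(state, action)])
```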

With the CIRCLES project, Bayen and his team plan to first run simulations to confirm that significant energy savings result from using the algorithms in autonomous vehicles. Next they will run a field test of the algorithm with human drivers responding to real-time commands.

DeepAir

The pollution project, named DeepAir (Deep Learning and Satellite Imagery to Estimate Air Quality Impact at Scale), is led by Berkeley Lab researcher Marta Gonzalez, who is also a professor in UC Berkeley’s City & Regional Planning Department. In past research, she has used cell phone data to study how people move around cities and to recommend electric vehicle charging schemes to save energy and costs.

For this project, she will take advantage of the power of deep learning algorithms to analyze satellite images combined with traffic information from cell phones and data already being collected by environmental monitoring stations.

“The novelty here is that while the environmental models, which show the interaction of pollutants with weather – such as wind speed, pressure, precipitation, and temperature – have been developed for years, there’s a missing piece,” Gonzalez said. “In order to be reliable, those models need to have good inventories of what’s entering the environment, such as emissions from vehicles and power plants.

“We bring novel data sources such as mobile phones, integrated with satellite images. In order to process and interpret all this information, we use machine learning models applied to computer vision. The integration of information technologies to better understand complex natural system interactions at large scale is the innovative piece of DeepAir.”

The researchers anticipate that the resulting analysis will allow them to gain insights into the sources and distribution of pollutants, and ultimately allow for the design of more efficient and more timely interventions. For example, the Bay Area has “Spare the Air” days, in which traffic restrictions are voluntary, and other cities have schemes to restrict traffic or industry.

While the idea of using algorithms to control cars and traffic may sound incredible at the moment, Bayen believes technology is headed in that direction. “I do believe that within 10 years the things we’re coming up with here, like flow smoothing, will be standard practice, because there will be more automated vehicles on the road,” he said.

4 human-caused biases we need to fix for machine learning

Bias is an overloaded word. It has multiple meanings, from mathematics to sewing to machine learning, and as a result it’s easily misinterpreted.

When people say an AI model is biased, they usually mean that the model is performing badly. But ironically, poor model performance is often caused by various kinds of actual bias in the data or algorithm.

Machine learning algorithms do precisely what they are taught to do and are only as good as their mathematical construction and the data they are trained on. Algorithms that are biased will end up doing things that reflect that bias.

To the extent that we humans build algorithms and train them, human-sourced bias will inevitably creep into AI models. Fortunately, bias, in every sense of the word as it relates to machine learning, is well understood. It can be detected and it can be mitigated — but we need to be on our toes.

There are four distinct types of machine learning bias that we need to be aware of and guard against.

1. Sample bias

Sample bias is a problem with training data. It occurs when the data used to train your model does not accurately represent the environment that the model will operate in. There is virtually no situation where an algorithm can be trained on the entire universe of data it could interact with.

But there’s a science to choosing a subset of that universe that is both large enough and representative enough to mitigate sample bias. This science is well understood by social scientists, but not all data scientists are trained in sampling techniques.

We can use an obvious but illustrative example involving autonomous vehicles. If your goal is to train an algorithm to autonomously operate cars during the day and night, but train it only on daytime data, you’ve introduced sample bias into your model. Training the algorithm on both daytime and nighttime data would eliminate this source of sample bias.
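
As a hedged illustration of how sample bias might be caught before training, the sketch below compares the share of each operating condition in a training set against the share expected in deployment; the daytime/nighttime labels, target shares, and tolerance are all made up for the example.

```python
# Minimal check for sample bias: compare the share of each operating condition in
# the training data against the share expected in deployment. The labels, target
# shares, and tolerance below are hypothetical.
from collections import Counter

training_conditions = ["day"] * 9500 + ["night"] * 500      # what we trained on
deployment_shares = {"day": 0.5, "night": 0.5}              # what the car will see
tolerance = 0.10                                            # acceptable gap

counts = Counter(training_conditions)
total = sum(counts.values())
for condition, expected in deployment_shares.items():
    observed = counts.get(condition, 0) / total
    if abs(observed - expected) > tolerance:
        print(f"Possible sample bias: '{condition}' is {observed:.1%} of training "
              f"data but ~{expected:.0%} of the operating environment.")
```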

2. Prejudice bias

Prejudice bias is a result of training data that is influenced by cultural or other stereotypes. For instance, imagine a computer vision algorithm that is being trained to understand people at work. The algorithm is exposed to thousands of training data images, many of which show men writing code and women in the kitchen.

The algorithm is likely to learn that coders are men and homemakers are women. This is prejudice bias, because women obviously can code and men can cook. The issue here is that training data decisions consciously or unconsciously reflected social stereotypes. This could have been avoided by ignoring the statistical relationship between gender and occupation and exposing the algorithm to a more even-handed distribution of examples.
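
One way to expose an algorithm to the “more even-handed distribution of examples” described above is to rebalance the training set across the sensitive attribute before training. The sketch below downsamples every (occupation, gender) group to the size of the smallest group; the records and counts are fabricated purely for illustration.

```python
# Sketch of rebalancing training examples so no (occupation, gender) combination
# dominates. The records are fabricated; a real pipeline would operate on image
# metadata or dataset annotations.
import random
from collections import defaultdict

records = ([{"occupation": "coder", "gender": "m"}] * 800 +
           [{"occupation": "coder", "gender": "f"}] * 100 +
           [{"occupation": "cook", "gender": "m"}] * 120 +
           [{"occupation": "cook", "gender": "f"}] * 700)

groups = defaultdict(list)
for record in records:
    groups[(record["occupation"], record["gender"])].append(record)

target = min(len(group) for group in groups.values())   # size of the smallest group
balanced = [r for group in groups.values() for r in random.sample(group, target)]
print(f"{len(records)} examples reduced to {len(balanced)} balanced examples.")
```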

Decisions like these obviously require a sensitivity to stereotypes and prejudice. It’s up to humans to anticipate the behavior the model is supposed to express. Mathematics can’t overcome prejudice.

And the humans who label and annotate training data may have to be trained to avoid introducing their own societal prejudices or stereotypes into the training data.

3. Measurement bias

Systematic value distortion happens when there’s an issue with the device used to observe or measure. This kind of bias tends to skew the data in a particular direction. For example, shooting training images with a camera that has a chromatic filter would distort the color in every image in the same way. The algorithm would then be trained on image data that systematically fails to represent the environment it will operate in.

This kind of bias can’t be avoided simply by collecting more data. It’s best avoided by having multiple measuring devices, and humans who are trained to compare the output of these devices.
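
As a rough sketch of the multi-device comparison suggested above: if two cameras photograph comparable scenes, a consistent offset in one color channel points to a device-level skew rather than natural variation. The arrays here are synthetic stand-ins for real image data, with the tint added artificially.

```python
# Compare average color channels from two cameras over comparable scenes.
# A consistent shift in one camera's channel means (here, a fabricated blue tint)
# hints at measurement bias from the device rather than from the scenes themselves.
import numpy as np

rng = np.random.default_rng(0)
camera_a = rng.uniform(0, 255, size=(200, 64, 64, 3))     # reference camera
camera_b = camera_a + np.array([0.0, 0.0, 25.0])          # simulated filter tint

mean_a = camera_a.mean(axis=(0, 1, 2))
mean_b = camera_b.mean(axis=(0, 1, 2))
for channel, delta in zip("RGB", mean_b - mean_a):
    if abs(delta) > 10:
        print(f"Channel {channel} differs by {delta:+.1f}; check the capture device.")
```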

4. Algorithm bias

This final type of bias has nothing to do with data. In fact, this type of bias is a reminder that “bias” is overloaded. In machine learning, bias is a mathematical property of an algorithm. The counterpart to bias in this context is variance.

Models with high variance fit the training data easily and can capture complexity, but they are sensitive to noise. Models with high bias, on the other hand, are more rigid and less sensitive to variations in the data and to noise, but prone to missing real complexity. Data scientists are trained to strike an appropriate balance between these two properties.
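
The tradeoff is easy to see in a few lines: fit the same noisy data with a very simple model (high bias) and a very flexible one (high variance), then compare errors on held-out points. NumPy’s polynomial fitting is used purely as an illustration; the data are synthetic.

```python
# Illustrating the bias-variance tradeoff: a degree-1 fit is too rigid to capture
# the underlying curve (high bias), while a very high-degree fit tends to chase
# the noise in the training points (high variance) and suffers on held-out data.
import numpy as np
from numpy.polynomial import Polynomial

rng = np.random.default_rng(42)
x = np.linspace(0, 1, 30)
y = np.sin(2 * np.pi * x) + rng.normal(0, 0.25, size=x.shape)
train, test = np.arange(0, 30, 2), np.arange(1, 30, 2)     # alternate points

for degree in (1, 14):
    model = Polynomial.fit(x[train], y[train], degree)      # least-squares fit
    test_error = np.mean((model(x[test]) - y[test]) ** 2)
    print(f"degree {degree:2d}: held-out mean squared error = {test_error:.3f}")
```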

Data scientists who understand all four types of AI bias will produce better models and better training data. AI algorithms are built by humans; training data is assembled, cleaned, labeled and annotated by humans. Data scientists need to be acutely aware of these biases and how to avoid them through a consistent, iterative approach, continuously testing the model, and by bringing in well-trained humans to assist.

AT&T Extends Broadband Connectivity Across Rural Alabama

AT&T Inc. (T) recently expanded its Fixed Wireless Internet service across rural and underserved locations in Alabama. With this endeavor, the company has rolled out broadband connectivity in 18 states across the United States.

Through its Fixed Wireless Internet service, AT&T will offer an Internet connection with download speeds of at least 10 Mbps and upload speeds of at least 1 Mbps. The connection is delivered from a wireless tower to a fixed antenna placed on the customer’s home. This cost-effective approach is arguably one of the best ways to deliver faster, high-quality broadband to customers in underserved rural areas.

Since 2015, AT&T has invested about $1.2 billion to improve the wireline and wireless networks in Alabama. The company will use fiber optics to connect the wireless towers with its dedicated global network to deliver the Fixed Wireless Internet service. AT&T has an extensive coverage of fiber optics in Alabama with more than 2 million strand miles of fiber optics spanning the state.

With the inclusion of Alabama, AT&T presently offers this service to more than 440,000 locations across 18 states and further aims to provide Internet access to more than 1.1 million locations by the end of 2020. In addition, the company is gearing up to launch the first standards-based 5G services to consumers in multiple U.S. markets by the end of 2018. AT&T has been working hard since 2017 to lay the foundation for its mobile 5G network and has completed network upgrades in 23 major cities.

AT&T’s wireless growth opportunities from the launch of standards-based mobile 5G services and the FirstNet project remain impressive. Notably, completion of the 3rd Generation Partnership Project’s first implementable 5G new radio (NR) specification has set the stage for the global mobile industry to start full-scale development of 5G NR for large-scale trials and commercial deployments in 2019.

Despite the positives, AT&T’s stock has declined 5.5% in the past six months while the industry has rallied 6.7%. It remains to be seen how the company’s recent initiatives help its share price performance in the long run.

USDA Invests $600 Million In Rural Broadband, But Farmers Still Struggle To Connect

Late last month, the U.S. Department of Agriculture announced plans to add $600 million to fund e-Connectivity, a pilot program aimed at bridging the rural digital divide by improving broadband internet access for American farmers. But the rural digital divide is wider than ever, as farmers struggle to run tech-dependent businesses without broadband.

According to a 2016 Federal Communications Commission (FCC) report, 39% of rural Americans don’t have broadband internet access, but Daiquiri Ryan, a policy fellow at the non-profit Public Knowledge, says that number is almost certainly inaccurate.

“All of that data that the FCC collects…is self-reported by internet providers and it’s only done by the census block, [which] means if one person on the census block is served by that provider…the entire census block is considered served.” But in very rural, sparsely populated areas, says Ryan, that one house with service might be the only one with actual service for miles, so when the entire area of the map shows up as served, it’s not an accurate picture.

Worse yet, though members of Congress often complain about the lack of good quality data during rural broadband hearings, Ryan says no one seems to want to fix the problem. “[T]hey’re not doing much to fund better data collection, [which is] the first hump.”

In today’s farming operations, almost all aspects of production are enhanced by technology. “We have technology in our processing [operations], our feed mill systems” as well as trucks and tractors, explains Shawn Tiffany, owner and operator of Tiffany Cattle, a cattle feedlot located in Herington, Kansas. Tiffany happens to be a strong proponent of traditional farming techniques like cover cropping, but his cattle farm also relies on computers for tasks like calculating the feed for the cows and for tracking customer data.

Consumers may assume technology is only used on industrial-size farms, but that’s not true. Virtually all farmers today rely on technology in their day-to-day work, including those who run smaller, more boutique operations. Brian Fiscalini is one of the owners of Fiscalini Cheese Company, a farmstead cheesemaking and farming operation located in Modesto, California, and he recently spoke at the International Food Information Council Food Innovation Summit in Washington, D.C., to say, amongst other things, yes, even artisanal cheese makers rely on technology.

Jodi DeHate, who grew up on a dairy farm and today works with a number of dairy producers in rural Michigan, says today’s dairy farms are sophisticated operations, using sensors to detect everything from butterfat content to whether a cow might be sick. Even though she has the experience to tell when a cow isn’t feeling well, the technology provides more precise data, which helps guide farmers in making difficult decisions about their cows’ lives and welfare.

On the farm, spotty or limited internet coverage can be a costly problem. “Whether you’re on Wall Street or out here in the middle of Kansas, data equals dollars,” says Tiffany. Tiffany also sits on the board of directors for local rural telecom provider Tri-County Telecom Association or TCT, whose coverage relies on a fiber optic network. Tiffany’s original feedlot is in Herington, so computers at that location run on that fiber optic network, but his newer feedyard falls outside of TCT’s coverage area. As a result, he says, the second feedlot’s wireless system just isn’t as good. In the past year, he’s made frequent calls to a technician to try to get the system working, but it remains an ongoing problem.

In Michigan, DeHate says it’s a fact of rural farming life now that virtually all of the local equipment dealers have to have employees on staff dedicated solely to tech support. And farmers often find themselves having to get creative, MacGyver-ing their way to tech solutions by taking “the flash drive from their tractors…and [downloading]…information [onto] the computer in the house or vice versa.”

Coverage over the vast but sparsely populated areas in which DeHate lives and works is a constant challenge, especially since her work covers four different rural counties. “Once you get past the little villages or towns, [it’s] either wireless…or you’re stuck with satellite or [using] your phone as a hotspot.” DeHate has heard from farmers using the Wi-Fi at their local McDonald’s just to get their work done.

Lack of service doesn’t just impact the farm’s business, but the farmer’s family, too. Kids might be able to check out iPads or computers from school, but without good wireless service, the computers won’t work. DeHate says it’s especially worrisome if you can’t get service and you’re out working in the field, because “if you do have an accident, you [have to] hope you can actually walk somewhere or…make sure somebody can get to you.”

From a policy perspective, Public Knowledge’s Ryan says there are multiple roadblocks to getting rural Americans the coverage they need: “A lot of carriers don’t want to build out to these areas, because it’s very expensive, and it doesn’t turn a profit for them.” Worse than that, says Ryan, in many places the existing copper is rotting and there’s a backlog of repair requests. “[People call and are told] we only have five technicians for the entire state, so it’s going to take a couple of weeks.” Ryan says the FCC has rolled back a number of regulations and now “there’s really no complaint process.”

Tiffany, who sits on the board of a rural telecom provider and is familiar with its business challenges, is sympathetic to the position rural carriers find themselves in, as it’s tough to make a financial case for some of these services when the return on investment just isn’t there. That’s why the government funding was put in place to begin with, but Tiffany is worried about the new farm bill funding provisions, saying they seem to favor wireless technology rather than an investment in the necessary infrastructure. “I’m not going to say we’ll never get fiber but it’s going to make it very very difficult to put [new] fiber in [at this point].”

Ryan says the USDA has at least moved closer to implementation with new funding to its e-Connectivity program, and one good feature is that almost any entity can apply, including telecom providers and municipalities. But it’s also true that investment in underlying infrastructure is still lacking. According to Ryan, “[the] trend right now, especially with the Senate majority and the House majority in the GOP is to say things like satellite service is the future. 5G is the future, [or] wireless internet [is the future but]…we know [that] in real life…a wireless connection [isn’t] the same quality of service as you get over a fixed line.”

Rural Maine communities taking lack of broadband into their own hands

They’re tired of waiting for the private sector – and state and federal governments – to bring them up to speed.

Philadelphia residents Wayne and Katy Kach were hoping to move to the Prospect area in rural Maine to escape their cramped surroundings and be closer to family members, but it isn’t going to happen in the foreseeable future.

Like many urban professionals whose desire for more bucolic surroundings could benefit rural Maine communities, the couple’s relocation plan was a nonstarter because both require reliable high-speed internet access to do their jobs. The area where they want to live doesn’t offer it.

“We were hoping to find a place that would be able to afford us a little bit of land,” Wayne Kach said. “We found a few properties in the Prospect area, but it just kind of got shut down because there’s just no way for us to work.”

Many rural communities in Maine have been waiting decades for the major internet service providers to bring broadband service to their areas, a situation exacerbated by the state having the second slowest internet speeds in the country. The lack of broadband is a deterrent to would-be residents and businesses, and it thwarts local efforts at economic development. It also deprives existing residents of opportunities for entertainment, education, employment and digital health services.

Proponents of broadband expansion in Maine say rural areas have been left behind because internet service providers don’t see a financial benefit to upgrading their rural networks, the state lacks strong leadership to push comprehensive broadband initiatives, and many rural residents still don’t understand why broadband service is important to their communities.

The issue is finally gaining some political traction at the statewide level. All four of Maine’s gubernatorial candidates have said expanding rural broadband service is a top priority, although their proposed solutions differ.

Maine consistently places at or near the bottom of national rankings for internet connectivity, and the primary reason is the state’s poorly connected rural areas.

Roughly 15 percent of Maine residents still don’t have access to broadband service as defined by the federal standard of at least 25 megabits per second download and 3 Mbps upload, said Peggy Schaffer, who runs the Maine Broadband Coalition, an informal federation of public policy professionals, educational institutions, businesses, nonprofit organizations and private individuals seeking to improve broadband access in the state. At 25 Mbps, a single user can engage in moderate internet usage such as streaming high-definition video, or multiple users can engage in light usage such as streaming music or browsing the internet.

Most of the roughly 200,000 Mainers without broadband access live in rural communities. At least 20,000 of them have no internet access at all.

The crisis is real. It affects out-of-work residents who can’t search for jobs online, home-based business owners who can’t connect digitally with customers, students who can’t complete homework assignments from home, and seniors who can’t rely on potentially lifesaving online health services.

But a growing number of rural towns are no longer satisfied with waiting for the private sector to bring them up to speed. Despite inadequate help from the state and federal governments, a handful of communities in Maine are working on taxpayer-subsidized broadband infrastructure projects that could serve as a model for the rest of the state. Another 50 or so towns are trying to drum up public support for broadband projects of their own.

One innovative project in the St. Croix Valley would create Maine’s first publicly owned broadband network with providers competing for customers at gigabit speed, and others would provide a similar level of service through public-private partnerships.

“There’s a lot of talk about ‘We need broadband,’ but the issue is how we do it, and that is on many levels,” Schaffer said. “One of them is money, and another one is structure: What does the structure look like for us doing broadband? And there’s a lot of different opinions about that.”

SAME OLD STORY: ‘BE HAPPY WITH WHAT YOU HAVE’

Rural Maine resident David Reed said the internet service in his community is so slow and unreliable that he and his wife are thinking about leaving town.

“I envy people who can get a wired service at all, even if it’s ‘just’ DSL,” Reed said. “I live on the coast, south of Bangor outside Belfast in Swanville, and I can’t get anything at all, not even DSL. We are looking to sell the house and move, if the right opportunity presents itself, because of lack of broadband.”

Reed said his current internet service, a fixed wireless service provided by UniTel, offers a maximum download speed of about 8 Mbps – when weather conditions are ideal. The rest of the time, download speeds range from 1 to 4 Mbps, he said.

Reed, who works in information technology, said not having reliable internet service has made it more difficult for him to do his job. He said the situation has been even more problematic for his wife, who had tried to start a home-based business involving online education and training but ultimately had to abandon the idea.

“It just didn’t work – she would spend all day trying to get through some of the training, and maybe get an hour’s worth out of a full eight- or 10-hour day,” he said. “Eventually she ended up going back to work for somebody else.”

Population density is a big factor in determining whether the big internet service providers such as Spectrum, Xfinity and Consolidated Communications will invest in upgrading a community’s communications infrastructure to accommodate wired broadband service, said Julie Jordan, director of Downeast Economic Development.

Just because a community is wired for telephone and cable TV service does not mean it has the required infrastructure for broadband internet service. Adding broadband requires a significant investment in additional equipment to boost the signal.

In most of rural Maine, the big providers simply don’t think there is any financial upside to providing broadband service, Jordan said, and they are not legally required to do so.

Jordan’s organization serves the St. Croix Valley area, which includes Calais, Baileyville and surrounding communities. She said the average residential internet download speed in her community is about 3 Mbps, and the maximum is about 10 Mbps.

Jordan said one of the problems for rural communities is that it is difficult to convince private companies to upgrade their infrastructure when the area already has internet service, even though the existing service isn’t very good. There are multiple internet providers in the St. Croix Valley, she said, but all of them provide service that is slow and unreliable.

“But because we did have service, we were overlooked as far as getting any kind of grant money or encouraging providers that were here to offer better broadband,” Jordan said. “You kind of get the same old story: ‘Be happy with what you have.’ ”

GETTING BUY-IN

Ellsworth resident and small-business owner Emily Shaffer said she loves her rural community and has no intention of leaving, but the lack of broadband service has made it more difficult for her to conduct business.

Shaffer, who designs and sells custom jewelry, said it is a slow and tedious process to do certain things such as uploading images of new designs to her website.

“It’s a pain, but it doesn’t stop me from doing business,” she said. “It’s just time and aggravation with things that I would otherwise be able to just click and go.”

The reluctance of big internet providers isn’t the only obstacle preventing rural Maine from joining the modern communications age, said Carla Dickstein, senior vice president of research and policy development at Brunswick-based community development financial institution Coastal Enterprises Inc., one of the founders of the Maine Broadband Coalition. Some small communities in Maine don’t even push for broadband because they don’t see the value in it, she said.

“Part of the problem is that the communities aren’t seeing what the future is,” Dickstein said. “I don’t think everyone is seeing why it (broadband) is so important, and if you don’t see that, then it’s hard to get your town mobilized and to make it a priority.”

In order to receive broadband service, small municipalities need to put “skin in the game” by helping to fund local infrastructure projects through bond sales and other means, even if it’s a small amount to begin with, she said.

Funding is a challenge, Dickstein said, but there is actually quite a bit of financial assistance available through state and federal grants that many rural communities aren’t even trying to tap into.

“I think the challenge is the lack of understanding of the future even more so than the lack of money,” she said.

In some communities, getting broadband service will require a team of local leaders who are willing to go door to door and sell individual residents on the value of broadband service and why they should agree to spend some of their tax dollars on infrastructure, Dickstein said.

Another obstacle is a lack of strong leadership at the statewide level to promote rural broadband expansion, said Nick Battista, policy officer at the Island Institute, an economic and community development organization based in Rockland that has been assisting small island and coastal communities with local broadband projects.

“(State broadband authority) ConnectME is doing what they can with a limited amount of funds, but we need to be investing closer to $50 million to $100 million a year on rural broadband, rather than $1 million a year, in public funds to solve this problem,” Battista said. “Communities are recognizing that it’s an economic and social imperative – they need it for health care, for telemedicine, for education. You have kids who are going to the library after the library closes and sitting outside in the parking lot just to do their homework. That’s not how we build a strong state.”

BUILDING BROADBAND ON THEIR OWN

There is hope on the horizon, but significant progress is going to require buy-in from Maine’s rural residents, said Fletcher Kittredge, CEO of GWI in Biddeford, one of the Maine-based internet service providers that have been working with rural communities on local broadband projects.

The number of rural towns where a majority of residents favor investing in broadband infrastructure is increasing, he said, and there are already a handful of municipal broadband projects in various stages of completion.

One example is Islesboro, an island community where the residents voted overwhelmingly in 2016 to invest in a $3.8 million broadband network that connects via an underwater fiber-optic cable to the statewide Three Ring Binder fiber-optic network.

Kittredge said the project took some convincing at first, but now more than 80 percent of Islesboro residents use the service, operated by GWI, which offers a 1 gigabit download speed for just $35 a month.

“It was an incredibly interesting project and we knew we were going to sink a lot of time into it … because it was the first one,” he said. “We’re incredibly pleased with the way it turned out because it could be a model for (success). I think it’s made an enormous difference for that community.”

The project required Islesboro to issue bonds to pay for the network’s construction, and then it contracted with GWI to build, own and operate the service. It’s a model that GWI and other small internet service providers are trying to replicate in other parts of the state.

“Right now, we’re looking at doing the towns of New Sharon, Dixfield and Blue Hill, and we’re desperately trying to go around getting people in those towns to affirmatively say, ‘If you build this, we would be interested,’ ” Kittredge said. “Because that’s a condition of applying for subsidized federal loans.”

An even more ambitious project is the Downeast Broadband Utility, a regional effort in the St. Croix Valley that, if successful, would create Maine’s first independent, publicly owned broadband network.

In partnership with Calais and Baileyville, Downeast Economic Development is working on a project to connect about 3,000 area households with gigabit fiber-optic internet service, said Jordan, the economic development group’s director.

Unlike any other broadband network in Maine, the Downeast Broadband infrastructure would be owned by the community, and internet service providers would pay to lease bandwidth on the network. A handful of providers have expressed interest, she said, but so far none have made binding commitments.

If successful, the rural community would have the only fiber-optic broadband network in Maine with multiple providers competing for business. If it fails, the community will have spent up to $3.1 million of taxpayer money building a network that no provider wants to use. It’s a risk area residents were willing to take.

“We’re rural, and we want to stay competitive and vibrant and become part of the rest of the world,” Jordan said.

How To Build A Connected City

Getting to be a Smart City first requires getting connected. A Connected City is a city or community that has the network infrastructure (fiber optics, Wi-Fi, small cells, towers) that allows for the efficient exchange and collection of information (voice, data, video) via a variety of devices both public and private (sensors, cameras, phones, traffic signals).

Street furniture such as light poles, bus shelters, kiosks, waste containers and other street level infrastructure is ideal for deployment of next generation (5G) telecommunications gear. The goal is to get the antennas closer to the user.

Smart Cities encourage the growth of telecom infrastructure by creating both a policy and investment environment that facilitates growth. There are a limited number of stakeholders in each city and community that have developed networks.

Stakeholders that usually control the majority of network assets in a city or community include: Utilities, Cablecos, Cellular Carriers, CLECs and Telephone Companies, Transportation Authorities and Public Safety Departments.

The city generally plays 3 roles in developing a Connected City since much of the investment will be borne by the Private Sector.

  • Regulatory and Policy Administration
  • Consumer of Network Technology
  • Owner and Provider of Network Technology and Street Furniture

Many Cities struggle to ensure broadband access to all consumers. Digital Inclusion initiatives involve everything from multi-stakeholder PPPs to the national Internet Essentials program provided by Comcast. Reaching rural communities continues to be difficult.

The City’s Role in Developing Network Infrastructure

Telecommunications is a core pillar of Smart City Infrastructure and requires an ecosystem of public-private cooperation to maximize its impact. Cities that actively engage in creating a connectivity-friendly environment and develop a Broadband Strategy deliver significant benefits to their citizens. Aligning with the utilities that often own most of the pole infrastructure is a key step.

It is no longer necessary to convince Cities that Connectivity is vital. The challenge is in funding and executing. Network construction is capital intensive and requires an operating budget each year. Maintaining networks requires specialized equipment and skill sets most cities are not budgeted to support. The collaborative approach has proven to be successful. Policy models exist that have been driving investment and growth.

The City of San Jose, California, recently provided this look at its Broadband Strategy for a Smart Cities Council event in Silicon Valley. The city looked at how to ensure broadband connectivity is ubiquitous and affordable. Its model projects the benefits of collaboration between the public and private sectors.

[Image: broadband strategy]

San Jose started to put its plan into action in early 2018. As part of the plan, the city took an aggressive stance with the two largest wireless carriers, AT&T and Verizon. It understood the importance to the carriers of timely deployments on city- or utility-owned infrastructure.

San Jose successfully negotiated customized agreements with each carrier that ensure dense network technology will be deployed in San Jose to serve the main business districts and residential areas. The city also persuaded the carriers to create a set-aside fund that enables it to provide network services to underserved parts of the city, advancing digital equity.

The City looked at its pole infrastructure and determined there were going to be 3 primary uses. The first use was as a traditional streetlight that would be upgraded to LED to meet sustainability goals. Second, the poles are critical to the broadband infrastructure needed to build out 4G and 5G. Third, poles are now being used to mount more and more things such as cameras, sensors and advertising. Connecting these things is called the Internet of Things.

First, the increased broadband collaboration led to new capabilities. San Jose developed a Demonstration Partnership Policy, which was established to support its Economic Development Strategy and city operations.

Under this policy, a SmartPole pilot project is being deployed. San Jose is working with Philips Lighting and the local utility, PG&E, to deploy 50 SmartPoles with energy-efficient, wireless-controlled LED lighting. The city estimates that the LEDs will provide a 50 percent energy savings compared to conventional streetlights.

Second, the SmartPole offers built-in 4G LTE small cells. This enhanced coverage serves the citizens of San Jose and provides capacity for IoT applications that the city may choose to implement. Philips also collaborated with PG&E to design a two-way communicating meter that sits on top of the SmartPole, rather than requiring the typical pedestal meter on the ground, thereby reducing street clutter.

[Image: broadband streetlight]

Other cities have been collaborating with Utilities and Pole Manufacturers. The City and County of Denver recently released this guide to deal with small cells and pole attachments.

They collaborated with the utility provider Xcel, Verizon, Aero Wireless and Jacobs Engineering to ensure a comprehensive look at both existing poles and new deployments. The results included a new design that will initially accommodate one small cell carrier but is being expanded to include two carriers’ gear. Denver County Engineer Jon Reynolds, who led the effort, commented, “At first we were talking different languages but once we established common goals, we were able to develop a few solutions that we are all pleased with.”

Jim Lockwood, President of Aero Wireless said, “Developing a new pole design that accommodates small cells was a priority for us all. We are seeing demand in most major markets now.”

[Image: small cell deployment]

Boston also took a collaborative approach and asked for submission of different designs. After consulting with all stakeholders, the City has begun to approve certain designs and make them available to all parties interested in deploying new poles. Mike Lynch, Director of Broadband for the City of Boston said, “There are a lot of new regulations being implemented and the demand for Small Cell deployments is rising fast. We got ahead of the design process and are approving most permits well under the timeline guidelines.”

[Image: cooperative design]

Smart City Data Projects Need Transparency and Oversight

Experts at the MetroLab Network Annual Summit warned about the need for control of data-heavy public safety projects, while emphasizing the positive side of community engagement.

NEWARK, N.J. — The massive amounts of data collected by cities, and the analytics it enables, are often trumpeted as forces to grow the collective good, whether that is to make traffic move more smoothly or improve air quality.

Without proper oversight and policy direction, however, that data can also lead to unjust policing or uncontrolled surveillance of communities, say researchers and policymakers who have studied the various types of smart city technologies being deployed in municipalities across the country.

“As we look at the next few years, the big challenge, in my mind, is there’s no formal public oversight over technology in our cities,” said Ryan Gerety, a technology fellow at the Ford Foundation, speaking Oct. 15 at the MetroLab Network Annual Summit at Newark’s New Jersey Institute of Technology. “Cities, themselves, recognize this and are looking for mechanisms to correct that.”

The agencies with some of the biggest budgets, and consequently the most technology, tend to be in areas of public safety, said Gerety, and that’s where oversight is often thinnest.

“We have many people in the room who are extremely expert at building systems to change, in very positive ways, communities, and people who are choosing to work with those city agencies who want to do that in the best way possible,” said Gerety during the panel discussion “Are Smart Cities Utopian or Dystopian?”

“The flip side of that is in places where you have much more regressive, say, police departments who want to do something different, who will go ahead and do it on their own, and where we don’t have a civil society that is informed in order to push back against illegal or inappropriate measures,” she added. “And so we need to have civil rights organizations, and social justice organizations, at the local level that have the technical capacity to fight back and evaluate these programs, using the best know-how we’ve learned to do it right, and we need legal, formal accountability.”

Three years ago Chicago rolled out its Array of Things project, an enterprise-scale, sensor-driven Internet of Things platform that collects data about the people, places and air quality in Chicago. The network gathers information related to the patterns of people as they move through the city, not data related to individuals.

Researchers wanted to design an infrastructure for research into intelligent infrastructure: street signals, real-time communications between infrastructure and vehicles, which require edge computing, as well as sensors to measure flooding. The data is updated every 30 seconds and is free and open. The application programming interface (API) is updated every five minutes.
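
The cadence described above (sensor readings every 30 seconds, an API refreshed every few minutes) lends itself to simple periodic polling. The sketch below shows only that generic pattern against a placeholder URL; the Array of Things API’s real endpoints, parameters, and response format are not specified here and would differ.

```python
# Generic polling loop for an open sensor-data API. The URL, query parameters, and
# response fields are placeholders, not the real Array of Things endpoints.
import json
import time
import urllib.request

API_URL = "https://example.org/open-sensor-api/observations?city=chicago"  # placeholder
POLL_SECONDS = 300   # match an API that refreshes roughly every five minutes

def fetch_observations(url):
    with urllib.request.urlopen(url, timeout=10) as response:
        return json.load(response)

while True:
    try:
        observations = fetch_observations(API_URL)
        print(f"retrieved {len(observations)} records")   # e.g., air-quality readings
    except OSError as err:
        print(f"fetch failed, will retry: {err}")
    time.sleep(POLL_SECONDS)
```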

With 80 percent of Chicagoans living within about two kilometers of one of the 100 sensor and camera pods, it was immediately obvious how a citywide IoT network like this one could raise concerns around privacy and using the data to further questionable activities like unjust policing or surveillance.

Advocacy groups, along with city and University of Chicago officials, came together to define a privacy policy that exceeded the state requirements in Illinois, and took that draft to communities for resident feedback, said Brenna Berman, the executive director of the City Tech Collaborative at UI Labs in Chicago, who worked on the rollout of the Array of Things.

“We did not get a lot of pushback about those cameras, mainly because this was a community-based project, not a public safety-based project,” she said during the panel discussion. Residents wanted more access to the images in an effort to be more engaged in the policing of their neighborhoods.

“So the pushback wasn’t, ‘Hey, we don’t want surveillance.’ [It was,] ‘We want to have participation in that surveillance so that we can help to serve our advocacy and engagement within our community,’” Berman recalled.

One of the biggest lessons learned was, “listen to the residents that are going to be engaged in the project, because you can’t guess what they are going to think or need unless you actually ask,” said Berman.

The next evolution of the Array of Things project is to install more of the sensor pods for more detailed readings, said Charlie Catlett, a senior computer scientist with Argonne National Laboratory at the University of Chicago.

“Our goal is to make 100 percent of the people who live in Chicago have one of these within 2 kilometers of where they live,” said Catlett during one of the MetroLab Summit sessions. “And we think we can push the 1 kilometer up to at least 70 to 80 percent of the population, at which point an air quality measurement or a noise measurement starts to mean something if it’s a kilometer away instead of 6 kilometers.”

The community IoT network project in Chicago — which brought together community and social justice advocacy groups — underscored how to move forward with sophisticated smart city projects dedicated to collecting and analyzing large amounts of data to bring about improvements to urban life, according to Berman.

“When you’re defining the project and the policy around it — whatever that might be — involve those advocacy groups, whether that’s the ACLU [American Civil Liberties Union], or a specific community representation group, so that they are part of the definition of the project,” she said. “You may not be able to implement everything they suggest. You may not be able to address every concern that they have. But essentially, having the advocacy voice in the tent to help you define the program can go a long way in defining a program that will more holistically understand what the perspective of the overall ecosystem is going to be.”

Smart Cities are Getting Smarter, But Challenges Remain

Ubiquitous sensors and applications are driving rapid growth for smart cities, but machine learning is not yet advanced enough to cope with the capacity demands.

Ubiquitous sensors in mobile robots, aerial drones, and autonomous vehicles, plus connections to municipal infrastructure through the Internet of Things, promise more efficient delivery of utilities and reduced traffic, among other things. While the variety of sensors and applications for smart cities has grown rapidly in recent years, a lot of work remains, especially in the areas of machine learning to analyze and interpret the data from these sensors, experts and observers said.

“[There is] still a long way to go to achieve a smart city,” said Mateja Kovacic, a visiting research fellow at the Urban Institute at the University of Sheffield, and a postdoctoral research fellow at the Nissan Institute of Japanese Studies, University of Oxford. Kovacic points to a number of notable examples in the smart-cities-in-progress space, including Barcelona, Spain, and Dubai.

In Barcelona, municipal authorities have installed smart solar trash cans, and free Wi-Fi routed via street lighting, as well as “sensors that monitor air quality and parking spaces,” Kovacic said.

Dubai, which established a blockchain strategy aimed at creating the world’s first blockchain-powered government, also has an autonomous transportation strategy that seeks to make 25% of all transportation in the city autonomous by 2030.

“There are also efforts to make policing, security, governance, healthcare and public services more autonomous through artificial intelligence, which I see as an extension and expansion of the ‘smart’ paradigm,” Kovacic said.

Autonomous robotics in the next age

Joshua Meler, senior director of marketing at Hangar Technology, said he believes the world is entering an age in which autonomous robotics “will transform how companies operate, industries evolve, and economic opportunities are uncovered.”

Hangar develops a platform that combines drone hardware, software, and data analytics to enable autonomous drones to collect and interpret visual data. Meler said that up until now, the platform has been employed mainly for business uses – via drones that automate the end-to-end “aerial insight” supply chain. But he said it has also been rolled out for smart city infrastructure applications in controlled environments and in a limited capacity.

“The next stage for Hangar is an era where computers augment visual insights, automating observations and alerting humans of areas that require attention,” Meler said. “This includes counting traffic, identifying cracks on bridges, recognizing inventory on construction sites and more — without human intervention required. We’re not there today, but as technology advances and as regulations facilitate autonomous operations, the Hangar platform will be capable of facilitating many of the applications of smart cities.”

Partnerships drive applications in Finland

The city of Tampere, Finland, is in the process of establishing several innovative and digital smart city solutions through cooperation between companies, organizations, municipalities, and citizens. Pirkko Laitinen, communications manager for Smart Tampere, said the aim is to “create better services for the citizens, and serve as a partner, a platform, and a reference for the companies on their way to the international markets.”

She said the strategic economic program approaches this in two ways. From the inside, the program is “taking the city’s own services to the digital age through agile testing.” On the outside, the program helps businesses “create new business models and smart city solutions through ecosystem building and platform creating.”


Pirkko Laitinen, Smart Tampere

The program focuses on seven smart city themes that Laitinen said are strong in Tampere:

  • Mobility
  • Health
  • Industry
  • Governance and citizens
  • Research and education
  • Buildings and infrastructure
  • Connectivity

“One robot-based new business model we have created with companies is the SmartMile delivery service points, which are in shared use among all parcel delivery service providers,” Laitinen said. “[This] means that online store customers can receive all their orders in one place. The robotics inside the machine is done by Konecranes.”

Machine learning needs to increase capacity

With increasing demands placed upon machine learning by smart city applications, Kovacic said she believes it will soon be advanced enough to cope with those demands.

The main challenge, she said, is that current machine learning “does not yet have the capacity to handle the quantity of data, and is not autonomous enough to analyze data without human intervention.” Another challenge is integrating different physical and virtual technologies necessary to make a smart city genuinely smart.

“The existing challenges can be overcome by further work on machine learning technology and nurturing a mindset with a holistic, integrative approach,” Kovacic said. “But there is no leapfrogging here, it simply takes time.”


Automobiles with sensors will provide data to smart cities. Source: Smart Tampere

“Another step toward overcoming existing challenges is being more aware that the physical technology, like robots, is an integral aspect of machine learning and vice versa,” she added. “There is no place for mind-body dualisms – or virtual-physical – there needs to be an attempt at integration. Lastly, cybersecurity and data privacy and protection are among the main issues and will need to be dealt with utmost care and consideration for individual and social rights and needs.”

Meanwhile, Laitinen pointed out that machines can currently learn simple tasks and that the technology is developing as the algorithms get better.

“As a city, we are still learning about what would be the best way to gather data from multiple different areas into one pool,” Laitinen said, “and how to analyze it in order to offer it for the companies to use.”

The skies will get smarter before the ground

Hangar’s Meler said he believes key sensor innovation and smart city trends “will happen in the sky before they happen on the ground.” The path to an autonomous world “must first rise up, in a largely uninhabited space free of children chasing soccer balls across the light, running groups beating the crosswalk light, or distracted drivers listening to the radio and texting a friend,” Meler said.

“The fact is, completely autonomous drones are years away, while cars and robotics will take at least a decade before they prove safe at scale,” he added. “For this reason, I think we’ll see meaningful innovation [in the air] first. Drone hardware will enable heavier payloads and longer flight times. Sensors will get smaller, better and cheaper. Governments and industries will lift regulations and restrictions. And this Solow’s Paradox we’re experiencing with digitization will hit a tipping point, and we’ll enter a new age of productivity.”

Kovacic said she envisions the “full integration of vehicles, drones, and robot-mounted sensors with the city through IoT,” particularly since “the quantity of data a city can produce exceeds human capacity, and needs a sophisticated network of everything.”

“Swarm technology is very promising and can be applied in vehicles, drones and different robots to produce collective action and decision-making,” Kovacic said. “Another key innovation may be decentralization. Unlike the old smart-city paradigm, where stationary sensors and cameras collect data and send it to a centralized system for analysis, the analysis and decision-making will become dispersed, decentralized and more efficient and instantaneous.”

“[A] smart city will no longer be a static accumulator of data but will become extended through mobile technology with capability to interact with each other and instantaneously make decisions based on this interaction without human intervention,” she added.

Even so, Kovacic said she suspects such developments will take more than a few years, and envisions a proliferation of various robots, such as drones for e-commerce, shared autonomous vehicles, service and retail robots, and city maintenance swarm robots. She also said she expects to see more machine learning-enhanced services across a smart city, from governance to the service industry.

“In California, there is currently underway a pilot project where autonomous vehicles pick up passengers, and delivery robots deliver groceries and food,” Kovacic said. “These are just two examples of what we can expect from future smart city applications – but only when these technologies are also connected and interact with the city — which they are currently not — and when there is a feedback loop between them and the city – a truly cybernetic city.”

Human Brain-Sized Artificial Intelligence (AI): Coming Soon To A Cloud Data Center Near You

Data center-hosted artificial intelligence is rapidly proliferating in both government and commercial markets, and while it’s an exciting time for AI, only a narrow set of applications is being addressed, primarily limited to neural networks based on a convolutional approach. Other categories of AI include general AI, symbolic AI and bio-AI, and all three impose different processing demands and run distinctly different algorithms. Virtually all of today’s commercial AI systems run neural network applications. But much more control-intensive and powerful AI workloads using symbolic AI, bio-AI and general AI algorithms are ill-suited to GPU/TPU architectures.

Today, commercial and governmental entities that need AI solutions are using workarounds to achieve more compute power for their neural net applications, and chief among them are specialty processors like Google TPUs and NVIDIA GPUs, provisioned in data centers specifically for AI workloads.

However, using TPUs and GPUs, even when they are dedicated to AI processing tasks, can still be problematic: it drives up data center capital expenditures for AI-specific processors, and it drives up software development costs (GPUs are notoriously difficult to program). In most hyperscale data centers today, standard CPUs handle normal data center workloads while specialty TPUs or GPUs, comprising approximately 5-10% of server rack space, are dedicated to AI/neural net processing.

CPUs are easy to program but become slow and power-hungry when tasked with highly parallel AI applications. Specialty AI processors are faster and more power efficient than CPUs for neural net applications, but they are difficult to program.

Today, if embarrassingly parallel computation is the goal (i.e., executing the same instruction independently across a large number of data sets), as in convolutional neural networks, TPUs/GPUs are the go-to solution. They are more efficient than CPUs for convolutional neural net processing (in the case of TPUs, up to 30x faster), because fetching and scheduling an instruction uses significantly more power than actually executing that instruction on a single data set. A specialty AI processor such as a GPU fetches a single instruction and executes it on 32 data sets simultaneously (a “warp” of 32 threads on NVIDIA hardware), maximizing throughput and minimizing power per operation.
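To make the distinction concrete, here is a toy sketch (written for illustration only, not a benchmark from this article) of the same convolution run one sample at a time versus as a single data-parallel call. It assumes PyTorch is installed and uses a GPU only if one is available:

```python
# Illustrative only: serial-style processing versus one data-parallel call.
import time
import torch
import torch.nn.functional as F

images = torch.randn(1024, 3, 32, 32)   # a batch of small "images"
kernel = torch.randn(8, 3, 3, 3)        # one set of convolution filters

# Serial style: one image at a time, the way a naive CPU loop would work.
start = time.time()
serial = [F.conv2d(img.unsqueeze(0), kernel) for img in images]
print(f"looped conv2d: {time.time() - start:.3f} s")

# Data-parallel style: one call over the whole batch; on a GPU this maps to
# many threads executing the same instruction on different data elements.
device = "cuda" if torch.cuda.is_available() else "cpu"
images_d, kernel_d = images.to(device), kernel.to(device)
start = time.time()
batched = F.conv2d(images_d, kernel_d)
if device == "cuda":
    torch.cuda.synchronize()            # wait for the GPU to finish before timing
print(f"batched conv2d on {device}: {time.time() - start:.3f} s")
```

Absolute timings will vary by machine; the point is that the batched call expresses the work in exactly the form that wide, data-parallel hardware is built to exploit.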

Google recently announced its third-generation TPU, which is still nowhere near the performance needed for real-time human brain simulation projects. And general AI, bio-AI and symbolic AI algorithms are not a good match for GPU/TPU processors.

The human brain needs to process huge amounts of information in order to act in real time, and that requires massive processing power. Today’s supercomputers don’t come close to the processing power of the human brain, which is estimated at roughly 10^19 floating point operations per second. One of the fastest supercomputers on the planet today, China’s Sunway TaihuLight, with 10,649,600 cores, achieves 93 petaflops (Rmax on the Linpack benchmark suite). That is less than 1 percent of what real-time simulation of the human brain would require: approximately 10^19 flops, or 10 exaflops (10,000 petaflops).
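A quick back-of-the-envelope check, using only the figures quoted above:

```python
# Rough comparison of TaihuLight's measured Rmax with the ~10 exaflops target.
brain_flops = 1e19        # ~10 exaflops: the real-time brain-simulation estimate above
taihulight_flops = 93e15  # 93 petaflops (Rmax on the Linpack benchmark suite)

print(f"TaihuLight delivers {taihulight_flops / brain_flops:.2%} of the target")
# -> about 0.93%, i.e. roughly a factor of 100 short
```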

We have a long way to go, but we are getting there. In fact, I predict we will reach that level of computing performance in about two years, give or take.

If you’re not yet familiar with ongoing efforts to build a super supercomputer, one capable of simulating a human brain, consider the Human Brain Project, which was established by the European Union in 2013 to unite the fields of neuroscience, medicine and computing for both commercial and research needs.

SpiNNaker (spiking neural network architecture), part of the Human Brain Project, is led by professor Steve Furber (one of the original designers of the ARM processor and a current member of Tachyum’s Board of Advisors) at the University of Manchester. SpiNNaker’s goal is to simulate the equivalent of a rat brain (roughly one-thousandth the scale of a human brain) in real time, using around 1 million ARM cores configured as a spiking neural network, which models neuronal activity more faithfully and uses far less power than “embarrassingly parallel” neural nets. If your brain were a conventional neural network, it would boil inside your skull.
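To show what “spiking” means in practice, here is a minimal leaky integrate-and-fire neuron in NumPy. This is a generic textbook model written for illustration, not SpiNNaker’s software or anything from the Human Brain Project, and the parameters are arbitrary:

```python
# A minimal leaky integrate-and-fire (LIF) neuron: activity is a sparse stream of
# spike events rather than dense matrix math, which is what event-driven hardware
# like SpiNNaker is designed to exploit.
import numpy as np

dt, steps = 1.0, 200                      # 1 ms time step, 200 ms of simulated time
tau, v_rest, v_thresh, v_reset = 20.0, 0.0, 1.0, 0.0
v = v_rest
rng = np.random.default_rng(0)
input_current = rng.uniform(0.0, 0.12, size=steps)   # arbitrary noisy input drive

spike_times = []
for t in range(steps):
    # Membrane potential leaks toward rest while integrating the input current.
    v += dt * (-(v - v_rest) / tau + input_current[t])
    if v >= v_thresh:                     # emit a spike only when threshold is crossed
        spike_times.append(t)
        v = v_reset                       # then reset and stay quiet until driven again

print(f"{len(spike_times)} spikes in {steps} ms, at t = {spike_times}")
```

Because most neurons are silent most of the time, an event-driven machine only spends energy when spikes actually occur, which is the efficiency argument behind SpiNNaker.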

Along with the examples described above, my company, Tachyum, is working on a breakthrough processor architecture called Prodigy. The Prodigy architecture offloads heavy-lifting tasks normally done in hardware to a Tachyum-proprietary smart compiler.

It’s only taken you about four minutes to read this article. During that time, people searched the web almost 14 million times, logged into Facebook 3.8 million times, tweeted 1.8 million times, watched more than 17 million YouTube videos, and swiped right or left on 4.4 million Tinder profiles.

When cloud-based data centers offer users AI applications at a reasonable cost, tasks like manually looking at Tinder profiles and then swiping will seem downright archaic. The new data and AI centers will know which profiles to flag for you, and they will know which YouTube videos you will want to watch. Sooner than you think, data centers will be the place to access low-cost AI solutions for everyone.

New NVIDIA Data Center Inference Platform to Fuel Next Wave of AI-Powered Services

Tesla T4 GPU and New TensorRT Software Enable Intelligent Voice, Video, Image and Recommendation Services

Fueling the growth of AI services worldwide, NVIDIA today launched an AI data center platform that delivers the industry’s most advanced inference acceleration for voice, video, image and recommendation services.

The NVIDIA TensorRT™ Hyperscale Inference Platform features NVIDIA® Tesla® T4 GPUs based on the company’s breakthrough NVIDIA Turing™ architecture and a comprehensive set of new inference software.

Delivering the fastest performance with lower latency for end-to-end applications, the platform enables hyperscale data centers to offer new services, such as enhanced natural language interactions and direct answers to search queries rather than a list of possible results.

“Our customers are racing toward a future where every product and service will be touched and improved by AI,” said Ian Buck, vice president and general manager of Accelerated Business at NVIDIA. “The NVIDIA TensorRT Hyperscale Platform has been built to bring this to reality — faster and more efficiently than had been previously thought possible.”

Every day, massive data centers process billions of voice queries, translations, images, videos, recommendations and social media interactions. Each of these applications requires a different type of neural network residing on the server where the processing takes place.

To optimize the data center for maximum throughput and server utilization, the NVIDIA TensorRT Hyperscale Platform includes both real-time inference software and Tesla T4 GPUs, which process queries up to 40x faster than CPUs alone.
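The release itself contains no code, but for orientation, a minimal sketch of the TensorRT 5-era Python workflow for building an FP16 engine looks roughly like the following. The model file name and settings are illustrative assumptions, and newer TensorRT releases move these options onto a builder-config object:

```python
# Illustrative sketch: compile an ONNX model into an FP16 TensorRT engine
# (TensorRT 5-era Python API; file names and sizes are placeholders).
import tensorrt as trt

TRT_LOGGER = trt.Logger(trt.Logger.WARNING)

builder = trt.Builder(TRT_LOGGER)
network = builder.create_network()
parser = trt.OnnxParser(network, TRT_LOGGER)

with open("model.onnx", "rb") as f:       # placeholder model file
    if not parser.parse(f.read()):
        raise RuntimeError("failed to parse the ONNX model")

builder.max_batch_size = 8                # tune for the serving workload
builder.max_workspace_size = 1 << 30      # 1 GB of scratch space for optimization
builder.fp16_mode = True                  # let Turing Tensor Cores run in FP16

engine = builder.build_cuda_engine(network)
with open("model.plan", "wb") as f:       # serialized engine, loaded later for serving
    f.write(engine.serialize())
```

INT8 deployment additionally requires a calibration step, which is one of the multi-precision paths the T4 is built for.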

NVIDIA estimates that the market for AI inference is poised to grow to $20 billion within the next five years.

The NVIDIA TensorRT Hyperscale Platform includes a comprehensive set of hardware and software offerings optimized for powerful, highly efficient inference. Key elements include:

NVIDIA Tesla T4 GPU – Featuring 320 Turing Tensor Cores and 2,560 CUDA® cores, this new GPU provides breakthrough performance with flexible, multi-precision capabilities, from FP32 to FP16 to INT8, as well as INT4. Packaged in an energy-efficient, 75-watt, small PCIe form factor that easily fits into most servers, it offers 65 teraflops of peak performance for FP16, 130 TOPS for INT8 and 260 TOPS for INT4.

NVIDIA TensorRT 5 – An inference optimizer and runtime engine, NVIDIA TensorRT 5 supports Turing Tensor Cores and expands the set of neural network optimizations for multi-precision workloads.

NVIDIA TensorRT inference server – This containerized microservice software enables applications to use AI models in data center production. Freely available from the NVIDIA GPU Cloud container registry, it maximizes data center throughput and GPU utilization, supports all popular AI models and frameworks, and integrates with Kubernetes and Docker.

Supported by Technology Leaders Worldwide

Support for NVIDIA’s new inference platform comes from leading consumer and business technology companies around the world.

“We are working hard at Microsoft to deliver the most innovative AI-powered services to our customers,” said Jordi Ribas, corporate vice president for Bing and AI Products at Microsoft. “Using NVIDIA GPUs in real-time inference workloads has improved Bing’s advanced search offerings, enabling us to reduce object detection latency for images. We look forward to working with NVIDIA’s next-generation inference hardware and software to expand the way people benefit from AI products and services.”

Chris Kleban, product manager at Google Cloud, said: “AI is becoming increasingly pervasive, and inference is a critical capability customers need to successfully deploy their AI models, so we’re excited to support NVIDIA’s Turing Tesla T4 GPUs on Google Cloud Platform soon.”
