How Music Motivates Athletes: Olympian Lindsey Vonn’s Rap-Enhanced Workouts

Here’s how music helps the world’s peak athletes reach a higher level of training and performance.

Philip Merrill, GRAMMYs | Nov 5, 2018 – 4:47 pm

One thing Olympic athletes such as skier Lindsey Vonn have in common with pro sports stars such as LeBron James and Floyd Mayweather Jr. is their reliance on musical motivation during intense training sessions. Music adds value to many things in life, but its connection to physical performance enhancement has made a wide audience curious whether motivational playlists can help them boost their own fitness.

On Nov. 5, Genius detailed downhill Olympic skiing champion Lindsey Vonn’s personal picks — especially rap and hip-hop — meant to inspire athletes of all levels to push through the common love/hate relationship with training, centered on the song "You Can Make It If You Try." Vonn’s favorite artists to train to include Beyoncé, Kodak Black, Drake, Far East Movement, Jay-Z, Jessie J, Kendrick Lamar, Lil Wayne, Missy Elliott, Nas, Notorious B.I.G., Rihanna, Justin Timberlake and more — a list that almost pumps you up just reading it.

Another 2018 Olympic skiing medalist, Cassie Sharpe, described music’s invigorating effect: "It really keeps me focused and keeps me in my own space." This focus on staying in the zone and reaching a flow state underlies the wider public interest, as athletes at every level seek higher performance and sharper focus in their own lives.

Vonn isn’t the first marquee athlete to reveal her playlist. Floyd Mayweather Jr. released his 42-song "Hard Work & Dedication" playlist in summer 2017 before his bout with Conor McGregor. On Apple Music this past September, new Laker LeBron James released a 62-song playlist titled "The Strongest." Hip-hop was heavily represented in both playlists, and James even took representation to a personally inspiring level by selecting all women.

"I believe that African American women are some of the strongest people on earth," James said. "I grew up around some amazingly strong women and am inspired by the strength I see around, including my mom, wife, and daughter."

Sports playlists are motivating scientific research as well. Last year, an academic paper titled "The Sound of Success: Investigating Cognitive and Behavioral Effects of Motivational Music in Sports" appeared in Frontiers in Psychology, exploring the topic more rigorously. The authors found that music assisted participants’ endurance and risk-taking but did not improve their motor coordination. A control group with no music was compared against participant-selected playlists as well as an experimenter-selected playlist including Eminem, David Guetta, Katy Perry and Kanye West. Comparing their results with the academic literature on the subject, the authors considered this a rich subject for further investigation.

As sharing music becomes more of a no-brainer, the practice has become more digital and daily. The group effect among high school students, where friends and teammates listen to each other’s favorite artists, has long been a staple of growing up in America.
In addition to playlists’ impact on sports training, it seems another dimension is being added to American life: answering the question, "What do your heroes listen to?"

The best cars under $40,000 in 2019

Earlier this month, we detailed our picks of the best cars, trucks and SUVs available under $30,000. But with the average new-car transaction price in the US sitting around $35,000, buyers have a lot more choices for just a bit more cash than that $30K mark.

To that end, we’re raising the price cap for this new list: the best cars under $40,000. That extra 10 grand opens the door for entry-level luxury cars, midsize SUVs and even long-range electric cars. Here are some of our favorites.

(Note: While all of the vehicles listed here have base MSRPs under $40,000, some of them offer fully loaded trim levels that can exceed this price cap.)

2019 Toyota Avalon
I tend to like small, sporty cars. The 2019 Toyota Avalon is neither of those things, so why am I recommending it? Because it’s a fabulously well-done large sedan and an epic long-distance cruiser. Its looks may not be for everyone, but there’s no denying its buttery-smooth powertrain and class-above cabin, which are among its many strong points.
A 3.5-liter gas V6 comes standard with 301 horsepower and 267 pound-feet of torque, offering plenty of power. An optional hybrid model with an electrically augmented 2.5-liter four-cylinder is available for $1,000 more, but unless gas prices spike enough to make its mid-40s fuel economy a salve for its power deficit (176 horsepower, 163 pound-feet), I’d recommend sticking with the standard engine.
Pricing starts at $36,480 (including destination) for a well-equipped base XLE, with top-shelf Limited models ringing up at just under $42K before options. At that end of the spectrum, you’re looking at an Avalon luxurious enough to make you forget all about this car’s costlier Lexus ES twin. Oh, and if that’s not enough to keep you in the Toyota showroom, know this: The Avalon has an infinitely less annoying infotainment interface, plus you can fold the rear seats down.
– Chris Paukert
Click here to read our 2019 Toyota Avalon review.

2019 Genesis G70
The compact sport sedan segment has long been my jam, and the most compelling entry in that field is the 2019 Genesis G70 — it even won Roadshow’s Shift Award for Vehicle of the Year. Any way you slice it, the smallest Genesis (so far) is a solid pick with a starting price of $35,895 (including $995 for destination).
The compact, rear-wheel-drive Genesis comes standard with a 2.0-liter, turbocharged four-cylinder engine that makes 252 horsepower and 260 pound-feet of torque, connected to an eight-speed automatic or an optional six-speed manual transmission. All-wheel drive is optional. A 3.3-liter, twin-turbo V6 is available, but it pushes the G70’s price north of $40,000. Besides, most Roadshow staffers prefer the 2.0T anyway.
You still get heaps of tech and driver-assistance features with the base car, including an 8-inch touchscreen with Apple CarPlay and Android Auto, as well as automatic emergency braking with pedestrian detection, adaptive cruise control, lane-keep assist, blind-spot monitoring with rear cross-traffic alert and automatic high-beams.
If you want that level of advanced driver assistance from BMW, Mercedes or Audi, you’ll have to pay closer to $50,000.
– Manuel Carrillo III
Click here to read our 2019 Genesis G70 review.

2019 Toyota RAV4
Toyota’s RAV4 has been one of the world’s best-selling small SUVs for a reason. It’s simple, stout, well-built and priced appropriately. Now, for 2019, it looks good, too.
The drivetrain options on the 2019 RAV4 may not be the most exciting in the world, but they are efficient and shouldn’t give you many problems going forward. Optional hybrid power in a compact SUV is a great selling point, too.
The new RAV4, particularly in Adventure trim, is a handsome SUV that begins to look a little like its more off-road-capable siblings without forcing those vehicles’ compromises on its owner. It’s a great crossover, and it’s more appealing than ever before.
– Kyle Hyatt
Click here to read our 2019 Toyota RAV4 review.

2020 Kia Telluride
I’m going with a somewhat unusual choice for this roundup, because I’ve recently spent a fair bit of time in Kia’s new Telluride and I’ve been really, really impressed. It’s remarkably good. I think it looks remarkably good, too, though it has definitely split opinion. The ride is on the leisurely side of comfortable, but it really is refined, lulling my passengers to sleep on multiple occasions. Meanwhile, the 3.8-liter V6 provides better-than-adequate power, and the eight-speed transmission is responsive and unobtrusive, which is really all you can ask of an automatic in a rig like this.
You can get into a front-wheel-drive Kia Telluride for $31,690, which is affordable given all it offers. Working within our $40,000 cap, I’d step up to the feature-packed EX trim, which starts at $37,090 and includes Kia’s comprehensive Highway Driving Assist system. Another $2,000 gets you AWD; add the $1,045 destination fee and you’re just $135 over the $40,000 mark for a big, comfortable SUV that’s as sophisticated to drive as it is to look at.
– Tim Stevens
Click here to read our 2020 Kia Telluride review.

2019 Volvo XC40
Subcompact luxury crossovers are often hit-or-miss affairs. Some skimp on luxury and style, while others are duds behind the wheel. Neither is true of Volvo’s XC40. If I were shopping for a small, premium SUV, it’s absolutely the one I’d buy.
The XC40 starts at $33,700, and I actually like its lowest Momentum trim the best (the same spec as Roadshow’s long-term XC40 test car). Opt for the more powerful T5 engine with all-wheel drive, choose a few option packages, and you’ve got a really nicely equipped crossover for right about $40,000. LED headlights, leather seats, a 9-inch touchscreen infotainment system and a ton of active safety equipment all come standard.
The XC40 impresses with its easygoing, comfortable driving dynamics, spacious interior and high-quality materials. It’s everything I love about Volvo’s larger, more expensive vehicles, all in a pint-sized package.
– Steven Ewing
Click here to read our 2019 Volvo XC40 review.

2019 Mercedes-Benz A-Class
The A-Class is a great entry point into the Mercedes-Benz brand.
This little sedan punches way above its weight with premium materials, a peppy turbocharged engine and plenty of technology.
Mercedes’ new MBUX infotainment system comes on an optional 10.25-inch touchscreen and brings natural voice recognition to the table. Plus, I love the augmented-reality overlay that’s available for the navigation: it displays directions directly on top of the real-time video feed from the front camera, ensuring you’ll never miss a turn. The turbocharged four-cylinder engine puts out 188 horsepower and 221 pound-feet of torque, which is more than adequate in the A220 sedan. A Sport mode dials up the transmission and throttle response, while Comfort is great for daily driving. Power goes to the front wheels through a seven-speed, dual-clutch transmission, although all-wheel drive is available for those in colder climates.
Overall, the new A-Class is a truly premium car — far more so than the last-generation CLA250 ever was.
– Emme Hall
Click here to read our 2019 Mercedes-Benz A-Class review.

2019 Hyundai Kona Electric
Earmarking $40K opens buyers up to a new generation of entry-level EVs, and one of our favorites is Hyundai’s new Kona Electric. The subcompact SUV combines generous safety and cabin tech with reasonable spaciousness, all wrapped in a city-friendly footprint on a wallet-friendly budget. It starts at $37,495, but with available electric-vehicle incentives, even the feature-rich Limited trim can squeeze in under $40,000.
The Kona’s electric motor sends 201 horsepower and 291 pound-feet of torque to its front wheels. That’s more get-up-and-go than the turbocharged gasoline Kona offers, though the heavier EV is a bit slower overall. Without gear changes or revs to build, however, the nearly silent electric SUV should feel more responsive off the line and around town. Of course, the most important number is the Kona Electric’s 258-mile EPA-estimated range — that should quell most range-anxiety concerns. At a DC fast charger, the EV can boost its battery to an 80% charge in about an hour. More common 240-volt, Level 2 home and public stations can juice the battery with a 9.5-hour charge.
– Antuan Goodwin
Click here to read our 2019 Hyundai Kona Electric review.

2019 Volvo V60
I didn’t pick the V60 only because wagons are such a great blend of car-like driving pleasure and SUV-like utility. Rather, it’s because the Volvo V60 is such a wonderfully well-designed and well-packaged machine for any type of driving. It’s the perfect everyday, everything vehicle.
The V60 is beautifully designed, is packed with the latest and greatest active-safety tech (albeit sometimes as paid options) and drives with poise and maturity. With either the base, thrifty 2.0-liter turbo engine or the optional, surprisingly powerful turbo- and supercharged T6 version, the Volvo offers a nice balance of workaday civility and easy power.
The Volvo V60 does sneak in under our $40,000 cap, starting at $39,895 with destination for the T5 Momentum model. But I’ll concede that it’s very, very easy to blow the budget once you start adding features or upgrading to all-wheel drive. Still, at any price the V60 is one of the most well-rounded entry-premium cars you can buy today.
– Jake Holmes
Click here to read our 2019 Volvo V60 review.
2019 Acura RDX
The third-generation Acura RDX landed for the 2019 model year, offering a number of improvements over its already solid predecessor. More appealing styling, a new turbocharged engine and Acura’s excellent SH-AWD system work together to make the RDX really interesting. Things are nicer inside, too, with a great layout of controls, some of the most comfortable seats in the business and a healthy list of tech offerings. With a base price of $38,395, including $995 for destination, it’s not a bad value for a sporty, entry-luxury crossover.
A 2.0-liter, turbocharged four-cylinder powers the RDX, producing a respectable 272 horsepower and 280 pound-feet of torque. The latter is available from just 1,600 rpm up to 4,500 rpm for peppy acceleration from stops and out of corners, and it works with a well-calibrated 10-speed automatic. Spring for the optional adaptive dampers and you’ll have a small crossover that can be entertaining to toss around, or comfortable for normal driving, at the push of a button. Acura’s new True Touchpad Interface with a 10.2-inch center screen is in charge of infotainment, and it’s intuitive to use after a short get-to-know period. It’s offered with navigation, a 16-speaker ELS audio system, a Wi-Fi hotspot and Apple CarPlay capability. For safety, forward-collision warning with automatic emergency braking, adaptive cruise control, lane-keep assist and a multi-angle rearview camera come standard.
– Jon Wong
Click here to read our 2019 Acura RDX review.

2019 Audi Q3
The 2019 Audi Q3 will land in the US later this year (summer, we’re told) packing some major improvements, and I believe that’ll give this diminutive German the edge over its competitors, many of which are very compelling.
Having already sampled the Euro-spec Q3 late last year, I can attest to its prowess in the handling department. Wielding the same 228-horsepower turbo I4 as the Volkswagen Golf GTI, it should also provide enough hustle to back up an on-road demeanor that begs you to have a little fun behind the wheel.
The Q3’s starting price of $35,695 including destination nets you some solid standard equipment, including a 10.25-inch gauge-cluster display, a panoramic sunroof, dual-zone automatic climate control, two rows of USB ports and automatic emergency braking.
– Andrew Krok
Click here to read our 2019 Audi Q3 review.

BSE closes 157.96 points up on September 26

New Delhi, Sept 26 (ANI): The Bombay Stock Exchange today closed 157.96 points up to stand at 26,626.32. At the National Stock Exchange, the Nifty closed 55.75 points up to stand at 7,967.60. INDIABULLS REAL ESTATE LTD and JPPOWER were among the top gainers of Group A, with increases of 10.78% and 9.79%, along with ANDHRA BANK and ALLAHABAD BANK, up 7.79% and 7.37% respectively. The top losers of Group A included SUZLON and CRISIL, down 4.98% and 4.47%, along with PIRAMAL ENTERPRISES LTD. and PETRONET, down 3.71% and 3.62%, at the close of the markets. The auto sector is up 151.07 points at 17,835.70, the banking sector is up 334.79 points at 17,859.88 and the realty sector is up 36.13 points at 1,628.49. The Indian currency is up 0.31% at Rs 61.15 per dollar.

Darjeeling bids tearful farewell to martyr Jiwan Gurung

Darjeeling: Rifleman Jiwan Gurung, who was martyred in Kashmir, was laid to rest with full military honours at Lower Lamahatta in Darjeeling. Thousands thronged to bid the braveheart a tearful farewell.

Two explosions had taken place near the Roopmati forward post in the Pukkharni area of the Nowshera sector in Jammu and Kashmir’s Rajouri district at 4:35 pm and 6:10 pm on Friday. The twin explosions killed Major Shashidharan V Nair (33) and Rifleman Jiwan Gurung (24) and injured several others. The IEDs are suspected to have been planted by militants from across the Line of Control.

The 24-year-old Rifleman, Jiwan Gurung, hailed from Lamahatta in Darjeeling. He had joined the Indian Army four years and six months ago. The news of his death was communicated to his family by the platoon commander of the 2/1 Gorkha Regiment on Friday night. His mother, Poonam Gurung, went into a state of shock after hearing the news. His father, Kiran Gurung, is a government service holder in Arunachal Pradesh.

Kiran Gurung was present to receive the body of his son at Bagdogra Airport on Saturday. "My son is a true hero of India. He has made the supreme sacrifice for the motherland," stated Kiran Gurung. Army top brass, ex-servicemen, minister Goutam Deb and administrative top brass, including the District Magistrate of Darjeeling, were present at Bagdogra Airport on Saturday. "We mourn the death of the braveheart. The Chief Minister called to pay homage on her behalf. It is indeed a very sad day for all of us," stated Goutam Deb. From Bagdogra, the mortal remains were taken to his native village Singritam, Lower Lamahatta, by road.

Intelligent Mobile Projects with TensorFlow: Build your first Reinforcement Learning model on Raspberry Pi

OpenAI Gym (https://gym.openai.com) is an open source Python toolkit that offers many simulated environments to help you develop, compare, and train reinforcement learning algorithms, so you don't have to buy all the sensors and train your robot in the real environment, which can be costly in both time and money. In this article, we'll show you how to develop and train a reinforcement learning model on Raspberry Pi using TensorFlow in an OpenAI Gym simulated environment called CartPole (https://gym.openai.com/envs/CartPole-v0). This tutorial is an excerpt from a book written by Jeff Tang titled Intelligent Mobile Projects with TensorFlow.

To install OpenAI Gym, run the following commands:

git clone https://github.com/openai/gym.git
cd gym
sudo pip install -e .

You can verify that you have TensorFlow 1.6 and gym installed by running pip list:

pi@raspberrypi:~ $ pip list
gym (0.10.4, /home/pi/gym)
tensorflow (1.6.0)

Or you can start IPython then import TensorFlow and gym:

pi@raspberrypi:~ $ ipython
Python 2.7.9 (default, Sep 17 2016, 20:26:04)
IPython 5.5.0 -- An enhanced Interactive Python.

In [1]: import tensorflow as tf
In [2]: import gym
In [3]: tf.__version__
Out[3]: '1.6.0'
In [4]: gym.__version__
Out[4]: '0.10.4'

We're now all set to use TensorFlow and gym to build some interesting reinforcement learning models running on Raspberry Pi.

Understanding the CartPole simulated environment

CartPole is an environment that can be used to train a robot to stay in balance. In the CartPole environment, a pole is attached to a cart, which moves horizontally along a track. You can apply an action of 1 (accelerating right) or 0 (accelerating left) to the cart. The pole starts upright, and the goal is to prevent it from falling over. A reward of 1 is provided for every time step that the pole remains upright. An episode ends when the pole is more than 15 degrees from vertical, or the cart moves more than 2.4 units from the center.

Let's play with the CartPole environment now. First, create a new environment and find out the possible actions an agent can take in the environment:

env = gym.make("CartPole-v0")
env.action_space
# Discrete(2)
env.action_space.sample()
# 0 or 1

Every observation (state) consists of four values about the cart: its horizontal position, its velocity, its pole's angle, and its angular velocity:

obs = env.reset()
obs
# array([ 0.04052535, 0.00829587, -0.03525301, -0.00400378])

Each step (action) in the environment will result in a new observation, a reward for the action, whether the episode is done (if it is, then you can't take any further steps), and some additional information:

obs, reward, done, info = env.step(1)
obs
# array([ 0.04069127, 0.2039052 , -0.03533309, -0.30759772])

Remember that action (or step) 1 means moving right, and 0 left. To see how long an episode can last when you keep moving the cart right, run:

while not done:
    obs, reward, done, info = env.step(1)
    print(obs)

# [ 0.08048328 0.98696604 -0.09655727 -1.54009127]
# [ 0.1002226 1.18310769 -0.12735909 -1.86127705]
# [ 0.12388476 1.37937549 -0.16458463 -2.19063676]
# [ 0.15147227 1.5756628 -0.20839737 -2.52925864]
# [ 0.18298552 1.77178219 -0.25898254 -2.87789912]

Let's now manually go through a series of actions from start to end and print out the observation's first value (the horizontal position) and third value (the pole's angle from vertical), as they're the two values that determine whether an episode is done.
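Since the horizontal position (obs[0]) and the pole angle (obs[2]) are what end an episode, it can be handy to encode that rule as a tiny predicate. The sketch below uses the thresholds quoted above; episode_should_end is a helper name introduced here for illustration, not a gym API, and gym's internal thresholds may differ slightly, so in practice the done flag returned by env.step() remains the authoritative signal:

import numpy as np

def episode_should_end(obs):
    # thresholds as quoted in the text: cart beyond 2.4 units from center,
    # or pole more than 15 degrees (converted to radians) from vertical
    cart_position, cart_velocity, pole_angle, pole_angular_velocity = obs
    return abs(cart_position) > 2.4 or abs(pole_angle) > 15 * np.pi / 180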
First, reset the environment and accelerate the cart right a few times:

import numpy as np

obs = env.reset()
obs[0], obs[2]*360/np.pi
# (0.008710582898326602, 1.4858315848689436)
obs, reward, done, info = env.step(1)
obs[0], obs[2]*360/np.pi
# (0.009525842685697472, 1.5936049816642313)
obs, reward, done, info = env.step(1)
obs[0], obs[2]*360/np.pi
# (0.014239775393474322, 1.040038643681757)
obs, reward, done, info = env.step(1)
obs[0], obs[2]*360/np.pi
# (0.0228521194217381, -0.17418034908781568)

(Strictly speaking, multiplying radians by 360/np.pi yields twice the conventional degree value, since the usual factor is 180/π; the sign and the trend are what matter here.)

You can see that the cart's position value gets bigger and bigger as it's moved right, the pole's angle gets smaller and smaller, and the last step shows a negative angle, meaning the pole is tipping to the left side of the center. All this makes sense, with just a little vivid picture in your mind of your favorite dog pushing a cart with a pole.

Now change the action to accelerate the cart left (0) a few times:

obs, reward, done, info = env.step(0)
obs[0], obs[2]*360/np.pi
# (0.03536432554326476, -2.0525933052704954)
obs, reward, done, info = env.step(0)
obs[0], obs[2]*360/np.pi
# (0.04397450935915654, -3.261322987287562)
obs, reward, done, info = env.step(0)
obs[0], obs[2]*360/np.pi
# (0.04868738508385764, -3.812330822419413)
obs, reward, done, info = env.step(0)
obs[0], obs[2]*360/np.pi
# (0.04950617929263011, -3.7134404042580687)
obs, reward, done, info = env.step(0)
obs[0], obs[2]*360/np.pi
# (0.04643238384389254, -2.968245724428785)
obs, reward, done, info = env.step(0)
obs[0], obs[2]*360/np.pi
# (0.039465670006712444, -1.5760901885345346)

You may be surprised at first to see that the 0 action causes the position (obs[0]) to keep growing for several steps, but remember that the cart is moving at a velocity, and one or several actions in the other direction won't decrease the position value immediately. But if you keep moving the cart to the left, you'll see that the cart's position starts becoming smaller (toward the left). Now continue the 0 action and you'll see the position get smaller and smaller, with a negative value meaning the cart has entered the left side of the center, while the pole's angle gets bigger and bigger:

obs, reward, done, info = env.step(0)
obs[0], obs[2]*360/np.pi
# (0.028603948219811447, 0.46789197320636305)
obs, reward, done, info = env.step(0)
obs[0], obs[2]*360/np.pi
# (0.013843572459953138, 3.1726728882727504)
obs, reward, done, info = env.step(0)
obs[0], obs[2]*360/np.pi
# (-0.00482029774222077, 6.551160678086707)
obs, reward, done, info = env.step(0)
obs[0], obs[2]*360/np.pi
# (-0.02739315127299434, 10.619948631208114)

For the CartPole environment, the reward value returned in each step call is always 1, and the info is always {}. So that's all there is to know about the CartPole simulated environment. Now that we understand how CartPole works, let's see what kinds of policies we can come up with, so that at each state (observation), we can let the policy tell us which action (step) to take in order to keep the pole upright for as long as possible, in other words, to maximize our rewards.

Using neural networks to build a better policy

Let's first see how to build a random policy using a simple fully connected (dense) neural network, which takes the 4 values in an observation as input, uses a hidden layer of 4 neurons, and outputs the probability of the 0 action, based on which the agent can sample the next action between 0 and 1. To follow along, you can download the code files from the book's GitHub repository.
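One note before the code: the snippets below refer to a "basic simple policy" from an earlier subsection of the book that isn't part of this excerpt. Judging by how y_target is computed later, it is presumably the angle rule "push in the direction the pole leans"; that reading is an assumption, so treat the following standalone non-neural baseline as a sketch for comparison:

import gym
import numpy as np

env = gym.make("CartPole-v0")

total_rewards = []
for _ in range(1000):
    rewards = 0
    obs = env.reset()
    while True:
        # assumed rule: push right (1) when the pole leans right, left (0) otherwise
        a = 1 if obs[2] > 0 else 0
        obs, reward, done, info = env.step(a)
        rewards += reward
        if done:
            break
    total_rewards.append(rewards)
print(np.mean(total_rewards))

With that baseline in mind, here is the neural-network random policy: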
# nn_random_policy.py
import tensorflow as tf
import numpy as np
import gym

env = gym.make("CartPole-v0")

num_inputs = env.observation_space.shape[0]
inputs = tf.placeholder(tf.float32, shape=[None, num_inputs])
hidden = tf.layers.dense(inputs, 4, activation=tf.nn.relu)
outputs = tf.layers.dense(hidden, 1, activation=tf.nn.sigmoid)
action = tf.multinomial(tf.log(tf.concat([outputs, 1-outputs], 1)), 1)

with tf.Session() as sess:
    sess.run(tf.global_variables_initializer())
    total_rewards = []
    for _ in range(1000):
        rewards = 0
        obs = env.reset()
        while True:
            a = sess.run(action, feed_dict={inputs: obs.reshape(1, num_inputs)})
            obs, reward, done, info = env.step(a[0][0])
            rewards += reward
            if done:
                break
        total_rewards.append(rewards)
    print(np.mean(total_rewards))

Note that we use the tf.multinomial function to sample an action based on the probability distribution of actions 0 and 1, defined as outputs and 1-outputs, respectively (the sum of the two probabilities is 1). The mean of the total rewards will be around 20-something. This is a neural network generating a random policy, with no training at all.

To train the network, we use tf.nn.sigmoid_cross_entropy_with_logits to define the loss function between the network output and the desired y_target action, defined using the basic simple policy in the previous subsection, so we expect this neural network policy to achieve about the same rewards as the basic non-neural-network policy:

# nn_simple_policy.py
import tensorflow as tf
import numpy as np
import gym

env = gym.make("CartPole-v0")

num_inputs = env.observation_space.shape[0]
inputs = tf.placeholder(tf.float32, shape=[None, num_inputs])
y = tf.placeholder(tf.float32, shape=[None, 1])
hidden = tf.layers.dense(inputs, 4, activation=tf.nn.relu)
logits = tf.layers.dense(hidden, 1)
outputs = tf.nn.sigmoid(logits)
action = tf.multinomial(tf.log(tf.concat([outputs, 1-outputs], 1)), 1)

cross_entropy = tf.nn.sigmoid_cross_entropy_with_logits(labels=y, logits=logits)
optimizer = tf.train.AdamOptimizer(0.01)
training_op = optimizer.minimize(cross_entropy)

with tf.Session() as sess:
    sess.run(tf.global_variables_initializer())
    for _ in range(1000):
        obs = env.reset()
        while True:
            # target from the basic angle policy: 1 (i.e. action 0, left) when the
            # pole leans left (the "< 0 else 0." completion is assumed; the original
            # line was cut off)
            y_target = np.array([[1. if obs[2] < 0 else 0.]])
            # sample an action and run one training step on the target
            a, _ = sess.run([action, training_op],
                            feed_dict={inputs: obs.reshape(1, num_inputs), y: y_target})
            obs, reward, done, info = env.step(a[0][0])
            if done:
                break

We define outputs as a sigmoid function of the logits net output, that is, the probability of action 0, and then use tf.multinomial to sample an action. Note that we use the standard tf.train.AdamOptimizer and its minimize method to train the network. To test and see how good the policy is, run the following code (within the same session):

    total_rewards = []
    for _ in range(1000):
        rewards = 0
        obs = env.reset()
        while True:
            y_target = np.array([1. if obs[2] < 0 else 0.])  # reference target; not used to pick the action
            a = sess.run(action, feed_dict={inputs: obs.reshape(1, num_inputs)})
            obs, reward, done, info = env.step(a[0][0])
            rewards += reward
            if done:
                break
        total_rewards.append(rewards)
    print(np.mean(total_rewards))

We're now all set to explore how we can implement a policy gradient method on top of this to make our neural network perform much better, getting rewards several times larger. The basic idea of a policy gradient is that in order to train a neural network to generate a better policy, when all an agent knows from the environment is the rewards it can get when taking an action from any given state, we can adopt two new mechanisms:

Discounted rewards: Each action's value needs to consider its future action rewards. For example, an action that gets an immediate reward of 1 but ends the episode two actions (steps) later should have a smaller long-term reward than an action that gets an immediate reward of 1 but ends the episode 10 steps later.
Test runs and updates: Test run the current policy and see which actions lead to higher discounted rewards, then update the current policy's gradients (of the loss for the weights) with the discounted rewards, so that an action with higher discounted rewards will, after the network update, have a higher probability of being chosen next time. Repeat such test runs and updates many times to train the neural network into a better policy.

Implementing a policy gradient in TensorFlow

Let's now see how to implement a policy gradient for our CartPole problem in TensorFlow. First, import tensorflow, numpy, and gym, and define a helper method that calculates the normalized and discounted rewards:

import tensorflow as tf
import numpy as np
import gym

def normalized_discounted_rewards(rewards):
    # discount_rate is defined below; Python resolves it when the function is called
    dr = np.zeros(len(rewards))
    dr[-1] = rewards[-1]
    for n in range(2, len(rewards)+1):
        dr[-n] = rewards[-n] + dr[-n+1] * discount_rate
    return (dr - dr.mean()) / dr.std()

Next, create the CartPole gym environment, define the learning_rate and discount_rate hyperparameters, and build the network with four input neurons, four hidden neurons, and one output neuron as before:

env = gym.make("CartPole-v0")

learning_rate = 0.05
discount_rate = 0.95

num_inputs = env.observation_space.shape[0]
inputs = tf.placeholder(tf.float32, shape=[None, num_inputs])
hidden = tf.layers.dense(inputs, 4, activation=tf.nn.relu)
logits = tf.layers.dense(hidden, 1)
outputs = tf.nn.sigmoid(logits)
action = tf.multinomial(tf.log(tf.concat([outputs, 1-outputs], 1)), 1)

prob_action_0 = tf.to_float(1-action)
cross_entropy = tf.nn.sigmoid_cross_entropy_with_logits(logits=logits, labels=prob_action_0)
optimizer = tf.train.AdamOptimizer(learning_rate)

To manually fine-tune the gradients to take the discounted rewards of each action into consideration, we first use the compute_gradients method, then update the gradients the way we want, and finally call the apply_gradients method. So let's now compute the gradients of the cross-entropy loss for the network parameters (weights and biases), and set up gradient placeholders, which will later be fed values that combine the computed gradients with the discounted rewards of the actions taken using the current policy during a test run:

gvs = optimizer.compute_gradients(cross_entropy)
gvs = [(g, v) for g, v in gvs if g != None]
gs = [g for g, _ in gvs]

gps = []
gvs_feed = []
for g, v in gvs:
    gp = tf.placeholder(tf.float32, shape=g.get_shape())
    gps.append(gp)
    gvs_feed.append((gp, v))
training_op = optimizer.apply_gradients(gvs_feed)

The gvs returned from optimizer.compute_gradients(cross_entropy) is a list of tuples, where each tuple consists of a gradient (of the cross_entropy for a trainable variable) and the trainable variable. If you run the script multiple times from IPython, the default graph of the tf object will contain trainable variables from previous runs, so unless you call tf.reset_default_graph(), you need to use gvs = [(g, v) for g, v in gvs if g != None] to filter out those obsolete training variables, which would return None gradients.
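Before the training loop, it can help to sanity-check what normalized_discounted_rewards actually returns. Here's a quick standalone check; the three-step reward list is made up purely for illustration, and the helper is repeated so the snippet runs on its own:

import numpy as np

discount_rate = 0.95

def normalized_discounted_rewards(rewards):
    # same helper as defined above
    dr = np.zeros(len(rewards))
    dr[-1] = rewards[-1]
    for n in range(2, len(rewards)+1):
        dr[-n] = rewards[-n] + dr[-n+1] * discount_rate
    return (dr - dr.mean()) / dr.std()

# Three steps of reward 1 each: the raw discounted sums are
# [1 + 0.95*1.95, 1 + 0.95*1, 1] = [2.8525, 1.95, 1.0], which
# normalize to roughly [1.21, 0.02, -1.24]: earlier actions in
# an episode get credited more, late ones get negative weight.
print(normalized_discounted_rewards([1., 1., 1.]))

This is exactly the weighting used below: each step's gradient gets scaled by its normalized discounted reward, so the actions taken just before an episode fails receive negative weight and are discouraged by the update.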
Now, play some games and save the rewards and gradient values:

with tf.Session() as sess:
    sess.run(tf.global_variables_initializer())

    for _ in range(1000):
        rewards, grads = [], []
        obs = env.reset()
        # using current policy to test play a game
        while True:
            a, gs_val = sess.run([action, gs], feed_dict={inputs: obs.reshape(1, num_inputs)})
            obs, reward, done, info = env.step(a[0][0])
            rewards.append(reward)
            grads.append(gs_val)
            if done:
                break

After the test play of a game, update the gradients with the discounted rewards and train the network (remember that training_op is defined as optimizer.apply_gradients(gvs_feed)):

        # update gradients and do the training
        nd_rewards = normalized_discounted_rewards(rewards)
        gp_val = {}
        for i, gp in enumerate(gps):
            gp_val[gp] = np.mean([grads[k][i] * reward for k, reward in enumerate(nd_rewards)], axis=0)
        sess.run(training_op, feed_dict=gp_val)

Finally, after 1,000 iterations of test play and updates, we can test the trained model:

    total_rewards = []
    for _ in range(100):
        rewards = 0
        obs = env.reset()
        while True:
            a = sess.run(action, feed_dict={inputs: obs.reshape(1, num_inputs)})
            obs, reward, done, info = env.step(a[0][0])
            rewards += reward
            if done:
                break
        total_rewards.append(rewards)
    print(np.mean(total_rewards))

Note that we now use the trained policy network and sess.run to get the next action with the current observation as input. The output mean of the total rewards will be about 200.

You can also save the trained model after training using tf.train.Saver:

    saver = tf.train.Saver()
    saver.save(sess, "./nnpg.ckpt")

Then you can reload it in a separate test program with:

with tf.Session() as sess:
    saver.restore(sess, "./nnpg.ckpt")

Now that you have a powerful neural-network-based policy model that can help your robot keep its balance, fully tested in a simulated environment, you can deploy it in a real physical environment, after replacing the simulated environment API returns with real environment data, of course—but the code to build and train the neural network reinforcement learning model can certainly be easily reused.

If you liked this tutorial and would like to learn more such techniques, pick up the book Intelligent Mobile Projects with TensorFlow, authored by Jeff Tang.
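A final note on the save-and-restore step above: tf.train.Saver checkpoints variable values, not the graph definition, so a separate test program has to rebuild the same network before calling restore. Here's a minimal sketch of such a program under that assumption (the layer definitions are copied from the snippets above, and nnpg.ckpt is the checkpoint path used earlier):

import tensorflow as tf
import numpy as np
import gym

env = gym.make("CartPole-v0")
num_inputs = env.observation_space.shape[0]

# rebuild the same graph as during training so the saved weights line up
inputs = tf.placeholder(tf.float32, shape=[None, num_inputs])
hidden = tf.layers.dense(inputs, 4, activation=tf.nn.relu)
logits = tf.layers.dense(hidden, 1)
outputs = tf.nn.sigmoid(logits)
action = tf.multinomial(tf.log(tf.concat([outputs, 1-outputs], 1)), 1)

saver = tf.train.Saver()
with tf.Session() as sess:
    saver.restore(sess, "./nnpg.ckpt")  # no variable initialization needed
    rewards = 0
    obs = env.reset()
    while True:
        a = sess.run(action, feed_dict={inputs: obs.reshape(1, num_inputs)})
        obs, reward, done, info = env.step(a[0][0])
        rewards += reward
        if done:
            break
    print(rewards)

Because the two dense layers are created in the same order and with the same shapes as during training, their default variable names match and the checkpoint loads cleanly.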