Brain-machine interface technology is already being developed by pioneers and researchers around the globe. It’s still early and today’s tech is fairly rudimentary, but it’s a fast-moving field, and some believe it will advance faster than generally expected. Futurist Ray Kurzweil has predicted that by the 2030s we will be able to connect our brains to the internet via nanobots that will “provide full-immersion virtual reality from within the nervous system, provide direct brain-to-brain communication over the internet, and otherwise greatly expand human intelligence.” Even if the advances are less dramatic, however, they’ll have significant implications.
How might this technology affect human consciousness? What are its implications for our sentience, our self-awareness, and our subjective experience of self (or the illusion of it)?
Consciousness can be hard to define, but a holistic definition often encompasses many of our most fundamental capacities, such as wakefulness, self-awareness, meta-cognition, and sense of agency. Beyond that, consciousness represents a spectrum of awareness, as seen across various species of animals. Even humans experience different levels of existential awareness.
From psychedelics to meditation, there are many tools we already use to alter and heighten our conscious experience, both temporarily and permanently. These tools have been said to contribute to a richer life, with the potential to bring experiences of beauty, love, inner peace, and transcendence. Relatively non-invasive, these tools show us what a seemingly minor imbalance of neurochemistry and conscious internal effort can do to the subjective experience of being human.
Taking this into account, what implications might emerging brain-machine interface technologies have on the “self”?
In this video we have three tips for achieving success and improvement, as shown by Elon Musk! In five hundred years, we may look back and say, “Elon Musk was the single most influential person of our entire century.” He has been able to do more in his 46 years than most of us could hope to do in ten lifetimes, and yes, part of that is because Elon is incredibly smart and he works incredibly hard. …and we were so hard up; we had just one computer, so the website was up during the day and I was coding at night. …Work hard, like, every waking hour; that’s the thing I would say, particularly if you’re starting a company. But working 100 hours a week still only puts you at two-and-a-half times as much as the average employee.
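The arithmetic behind that last claim is simple; a quick sketch (the 40-hour baseline for an average employee is an assumption, since the source only states the result):

```python
# Ratio of a 100-hour work week to a standard 40-hour week.
# The 40-hour baseline is assumed; the source states only the ratio.
musk_hours_per_week = 100
average_hours_per_week = 40  # assumed standard full-time week

ratio = musk_hours_per_week / average_hours_per_week
print(ratio)  # 2.5 -> "two-and-a-half times as much as the average employee"
```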
And most geniuses do not make the international impact that Elon has. In fact, the man with the highest IQ in the world, Chris Langan, works as a bouncer. So while smarts and work ethic are critical to Elon’s success, in order to impact the world you simply have to be able to influence other people; there’s no way around it. That includes employees, investors, and the public at large. And Elon Musk is unique here, because on this channel we can normally point to a leader’s ability to nail a public speech.
The power to innovate is falling into the hands of hyper-talented individuals.
Traditionally, the largest and most successful corporations were also the largest employers. Manufacturing and retail businesses required factories, warehouses, logistics and plenty of manpower, all working in harmony to deliver their product or service. Building this capability took years, requiring significant capital investments. Thus, competitors were few and far between, and disruption was painfully slow to make a dent in existing hierarchies.
But with the rise of technology, the model of success has gradually evolved, with businesses requiring fewer and fewer resources and employees to make an impact. WhatsApp is the perfect example: valued at $19bn with only 55 employees. And as we enter the next wave of tech innovation, we’ll increasingly see power transfer away from traditional ‘corporations’ and into the hands of smaller groups of highly skilled and hyper-talented individuals.
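A back-of-the-envelope calculation shows just how lean that is (using only the two figures cited above; the per-employee number is simply their quotient):

```python
# Rough value per employee for WhatsApp, using the figures cited above:
# a $19bn valuation and 55 employees.
valuation_usd = 19_000_000_000
employees = 55

value_per_employee = valuation_usd / employees
print(f"${value_per_employee:,.0f} per employee")  # roughly $345 million each
```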
More, but increasingly complex, opportunities
There has never been a more exciting time to be an entrepreneur, with emerging technologies bringing an unprecedented number of opportunities for innovation across platforms and software, with minimal physical resources and infrastructure required. We’re only now beginning to understand the potential of tools such as AI, machine learning, AR, VR, and the Internet of Things, and how they can be combined to create breakthroughs across a whole range of industries and problems.
Yet, identifying and then maximizing these complex and increasingly technical opportunities requires equally specialist knowledge and skills, along with the ability to respond rapidly to new innovations and competition. Understanding and manipulating the most cutting-edge tools requires the best brains, not to mention the drive, resilience and vision to identify the ideas with the most potential. The barriers to entry are rising, placing the power in the hands of those highly capable individuals, who are no longer reliant on building large organizations or physical assets to realize their ambitions.
Size doesn’t equal power
Corporations have always struggled to innovate, lacking the natural agility and flexibility of smaller organizations. However, as we enter this new age of innovation, it is becoming even tougher for the incumbents to keep up with the pace of change and increasing complexity, even with all their manpower and their abundance of cash lying dormant on the balance sheet.
What these big businesses lack is the ability to harness the power of the most talented individuals by providing an environment where they can thrive. Radical change needs mavericks and risk-takers, who in turn need the freedom and ability to innovate, not to be put in a straitjacket and told to behave and operate according to corporate rules. The most extreme innovators don’t fit into archaic organizational structures, which makes it very difficult for big businesses to attract, integrate and retain these individuals.
Investing in these most cutting-edge technologies is also extremely risky, and corporations are too afraid of making mistakes and too busy covering their backs to take a serious punt on ideas that might not build any value. Innovation requires agility and radical thinking, which is impossible in an environment that is paralyzed by politics, an aversion to change and worries of cannibalising its existing revenue streams and product lines. Their only real hopes are spin-offs, joint ventures and acquisitions of the most talented individuals – not in-house innovation.
Supporting the individual
Those who succeed in the next wave of innovation will be those individuals and small teams with the technical skills and a ‘knack’ for understanding the end vision, along with the freedom and agility to explore the unknown. But to have this freedom, these individuals must be adequately supported with resources, networks and capital to take the necessary risks and follow their instincts.
Elon Musk thinks we’re all probably trapped in a “Matrix”-like pseudo existence.
The universe is 13.8 billion years old, so any civilizations that may have arisen throughout the cosmos have had loads and loads of time to hone their technological know-how, the SpaceX founder and CEO explained early this morning (Sept. 7) during a long, wide-ranging and very entertaining appearance on comedian Joe Rogan’s popular podcast, “The Joe Rogan Experience.”
“If you assume any rate of improvement at all, then games will be indistinguishable from reality, or civilization will end. One of those two things will occur,” Musk said. “Therefore, we are most likely in a simulation, because we exist.” [13 Ways to Hunt Intelligent Aliens]
“I think most likely — this is just about probability — there are many, many simulations,” he added. “You might as well call them reality, or you could call them multiverse.”
The “substrate” on which these simulations are running, whatever it may be, is probably quite boring, at least compared to the simulations themselves, Musk further told Rogan.
“Why would you make a simulation that’s boring? You’d make a simulation that’s way more interesting than base reality,” Musk said, citing the video games and movies that humanity makes, which are “distillation[s] of what’s interesting about life.”
The billionaire entrepreneur is far from alone in this interpretation; a number of physicists, cosmologists and philosophers find the simulation hypothesis compelling. If even one advanced alien civilization with a predilection for creating simulations has ever arisen out there, the reasoning goes, then it could theoretically pop off thousands — or perhaps even millions or billions — of “fake” universes. And it would be hard for the inhabitants of these digital realms to figure out the truth, because all the evidence they could gather would likely be planted by the creators.
Indeed, the simulation idea is one of many possible explanations for the famous Fermi paradox, which basically asks, “Where is everybody?” (“Everybody” being aliens, of course.)
Google and Facebook are teaming up to make each company’s artificial intelligence technologies work better together.
The two companies said Tuesday that an unspecified number of engineers are collaborating to make Facebook’s open-source PyTorch machine learning framework work with Google’s custom chips for machine learning, dubbed Tensor Processing Units, or TPUs. The collaboration marks one of the rare instances of the technology rivals working together on joint tech projects.
“Today, we’re pleased to announce that engineers on Google’s TPU team are actively collaborating with core PyTorch developers to connect PyTorch to Cloud TPUs,” Google Cloud director of product management Rajen Sheth wrote in a blog post. “The long-term goal is to enable everyone to enjoy the simplicity and flexibility of PyTorch while benefiting from the performance, scalability, and cost-efficiency of Cloud TPUs.”
Facebook product manager for artificial intelligence Joseph Spisak said in a separate blog post that “Engineers on Google’s Cloud TPU team are in active collaboration with our PyTorch team to enable support for PyTorch 1.0 models on this custom hardware.”
Google first debuted its TPUs in 2016 during its annual developer conference, pitching them as a more efficient way for companies and researchers to power their machine-learning projects. The search giant sells access to its TPUs via its cloud computing business rather than selling the chips directly to customers, as Nvidia does with its graphics processing units, or GPUs, which are popular with researchers working on deep learning projects.
Artificial intelligence technologies like deep learning have grown in popularity over the years among tech giants like Google and Facebook, which use them to build software that can automatically perform tasks such as recognizing images in photos.
As more businesses explore machine learning technology, companies like Google, Facebook, and others have created their own AI software frameworks, essentially coding tools, intended to make it easier for developers to create their own machine-learning powered software. These companies have also offered these AI frameworks for free in an open source model in order to popularize them with coders.
CAPE CANAVERAL, Fla. — Back in May, I stood near a massive blue countdown clock as the minutes and seconds ticked away to zero. I was surrounded by fellow space reporters, a small group of whom traveled to Florida all the way from Bangladesh. The excitement built and built — and then the launch scrubbed. The Falcon 9 just wasn’t ready to fly that day. But the next day would be a different story.
We all returned to our same viewing spot, adjacent to the countdown clock, and stared across the water at the sleek black-and-white Falcon 9 perched atop its launchpad, Pad 39A at Kennedy Space Center. This time, everything proceeded as expected: When the clock hit zero, smoke billowed and bright flames lit up the sky as the Falcon roared to life. The sound waves that washed over us several seconds later were noticeably louder than expected, even for this experienced launch-watcher. That’s because this wasn’t an ordinary Falcon, but a souped-up version.
Known as the Block 5, this is the final variant of SpaceX’s workhorse — meaning there will be no more major design changes. The design will stay the same from now on to help SpaceX achieve a major goal: rapid reusability. [See all our photos from the Block 5’s two launches]
The company already reuses the first stages of its rockets, but this iteration will take that to the next level. Previous versions of the Falcon 9 could be used only two or three times, which is an incredible accomplishment, but not enough for SpaceX founder Elon Musk. His plan is to make rockets more like commercial airplanes, capable of flying many times with no action (other than refueling) taken between flights. According to Musk, the Block 5, the culmination of more than 10 years of development, will do just that.
Block 5 upgrades
To work toward that goal, SpaceX engineers outfitted this turbocharged Falcon with some sweet upgrades over its predecessors. The design changes — which include improved engines, a more durable interstage (the piece that connects the rocket’s two stages), titanium grid fins and a new thermal protection system — will help the booster hold up better to launch stresses. According to SpaceX, each Block 5 can fly 10 or more times before requiring light refurbishment, and as many as 100 times before the booster is retired.
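Taken at face value, those reuse figures imply a maintenance cadence like the following sketch (the numbers are SpaceX's stated targets as quoted above, not demonstrated performance):

```python
# Implied maintenance schedule for a Block 5 booster, using SpaceX's
# stated targets: light refurbishment every 10 flights, retirement at 100.
flights_per_refurb = 10
max_flights = 100

# No refurbishment is needed after the final flight, hence the minus one.
light_refurbs = max_flights // flights_per_refurb - 1
print(light_refurbs)  # 9 light refurbishments over a full 100-flight life
```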
Musk has said that we will see a Block 5 launch, land and relaunch within the same day sometime next year. As the months tick away and the aerospace company focuses on its big task for the year, launching the first uncrewed test flight of the commercial crew program, that goal remains a lofty one. However, a more reasonable goal — one that SpaceX is close to achieving — is to see the same Block 5 booster launch more than two times in a year.
Following the first Block 5 launch on May 11, which placed Bangladesh’s first satellite — the Bangabandhu-1 — in orbit, SpaceX officials said they didn’t know when the recovered booster would fly again, as they would most likely take it apart and inspect it to make sure it performed as expected. So, it was a bit of a surprise when the company announced that the Bangabandhu-1 booster would fly again on Aug. 7, just 12 weeks later.
That second launch, at Cape Canaveral’s Pad 40, may have been more of a spectacle than the first. It’s like with each launch the booster tries to outshine itself. Sitting in folding chairs on a causeway across the water from the launchpad, a group of space reporters waited. The Milky Way was barely visible overhead. One spectator even brought a telescope, and we peeked at Mars — which shined above like a glowing copper orb — before turning the scope to the launchpad.
The Falcon appeared upside down in the viewfinder but stood ready to launch. We could see what looked like breath emanating from the rocket as the last of the cryogenic fuels that power the rocket were loaded. As the clock hit zero, the night sky lit up bright orange as the Falcon roared to life. Its engines were just as unexpectedly loud as they were the first time. But unlike its first trip to space, which was a bit more dramatic with several holds and a scrub, this flight went off right at the beginning of the window.
As the Falcon climbed to space, the glow from its engines could be seen for several minutes. Surprisingly, after the booster separated from the upper stage and started its descent, far in the distance, we could see the Falcon’s engines ignite for the first of its multiple planned landing burns. Cheers erupted over the loudspeaker as confirmation came in that the booster had touched down on the drone ship a second time.
SpaceX’s successful launch and landing of the Block 5 booster (on the company’s East Coast-based drone ship “Of Course I Still Love You”) with so little time in between is a huge step toward quicker re-flight times.
In another surprising twist, before the second landing was even confirmed a success, the SpaceX launch webcast seemed to suggest that this booster would fly a third time before the end of the year. (Good thing it stuck its landing.) Which mission that booster will fly, though, has not yet been announced.
Returning to port
A few days after its second flight, the booster stood proudly perched on the deck of the drone ship as it made its way back to port. I stood with a crowd of rocket enthusiasts on the docks, welcoming B1046 (SpaceX’s designation for the booster); some of the space buffs had tracked the ship carrying it to ensure they would get the first glimpse as it peeked over the horizon.
70 Percent Of Consumers Will Be Grocery Shopping Online By 2024
You now have a chance to try Kroger’s self-driving grocery delivery… if you happen to live in the right part of Arizona.
The chain has launched its driverless delivery pilot at a single Fry’s Food Stores location in Scottsdale, giving you a chance to receive foodstuffs courtesy of Nuro’s autonomous vehicles. Order through the Fry’s website or app and the robotic courier can deliver either the same day or next day for a $6 flat fee. You’ll have to live in the same 85257 ZIP code, so you can’t make them drive across town just to satisfy your curiosity.
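The pilot's rules are simple enough to sketch as a small eligibility check. To be clear, the function and constant names below are hypothetical, not Fry's or Nuro's actual API; only the ZIP code and the flat fee come from the article:

```python
# Hypothetical sketch of the pilot's delivery rules as described above.
SERVICE_ZIP = "85257"   # the only ZIP code served in the pilot
FLAT_FEE_USD = 6        # flat delivery fee, same-day or next-day

def delivery_quote(zip_code):
    """Return the flat delivery fee, or None if the address isn't served."""
    return FLAT_FEE_USD if zip_code == SERVICE_ZIP else None

print(delivery_quote("85257"))  # 6
print(delivery_quote("85004"))  # None: downtown Phoenix is out of range
```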
Sadly, you won’t see Nuro’s custom R1 vehicle (above) roll up to your home in the earliest stages of the pilot — it’ll be a modified Toyota Prius instead, and the very first phase will include a just-in-case driver. The purpose-built machines will only start delivering your goods sometime in the fall, after they’ve completed certification and testing.
This isn’t the first self-driving grocery test in the state, as Waymo and Walmart are trialing pick-ups in Phoenix. This is, however, the first real chance many will have for driverless delivery. Just don’t expect to see this service spread far and wide for a while. Autonomous car companies have gravitated toward Arizona both due to its support for truly driverless testing as well as its warm climate and relatively sparse population. You won’t see mass adoption until completely driverless cars are both legal in more states and know how to handle snow, ultra-dense traffic and similar hazards.
We’re a long way from HAL 9000 (thankfully), but NASA is considering a bigger role for high-end computers in deep-space missions, such as a journey to Mars. To prepare, the International Space Station has been hosting a system built by Hewlett Packard Enterprise (HPE) for the past 11 months. The initial findings, according to HPE: it works without major glitches.
The system, an Apollo 4000-series enterprise server, is considered a “supercomputer” because it can perform 1 trillion calculations per second (one teraflop). That’s not so rare nowadays, but it’s way more computing power than NASA has had in space. Those resources can do complex analysis on large amounts of data that aren’t practical to beam back to Earth.
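For scale, "one teraflop" is just a unit definition; a quick sketch of what that rate adds up to (the per-day figure is simple multiplication, not an HPE benchmark):

```python
# "One teraflop" means one trillion floating-point calculations per second.
calcs_per_second = 1_000_000_000_000  # 1e12, i.e. one trillion

teraflops = calcs_per_second / 1e12
calcs_per_day = calcs_per_second * 86_400  # 86,400 seconds in a day
print(teraflops)      # 1.0
print(calcs_per_day)  # 86,400,000,000,000,000 calculations every day
```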
The key aspect of this test was to see whether a standard, off-the-shelf computer could survive the abuse of life in space, especially radiation exposure, using only software modifications.
The computer will get a full evaluation when it returns to Earth later this year, but HPE says it’s already learned three valuable lessons:
Software can protect a system: The Apollo 4000 constantly monitored the performance of key components for possible effects from radiation. Whenever one operated out of parameters, the system hunkered down in idle mode, and then did a full health check before resuming.
You can’t count on the internet: HPE’s software was written assuming near-constant internet access, which is not the reality in space. HPE is considering modifications not just for spaceborne systems but for any running in remote locations.
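HPE hasn't published the monitoring code, but the behavior described in the first lesson — watch key components, drop to idle when a reading goes out of parameters, and resume only after a health check — can be sketched as a tiny state machine. Everything below, from the parameter names to the thresholds, is invented for illustration:

```python
# Hypothetical sketch of the radiation-watchdog behavior described above.
# Monitored parameter names and their allowed ranges are invented.
RUNNING, IDLE = "running", "idle"

LIMITS = {
    "cpu_error_rate": (0.0, 0.01),       # assumed acceptable error rate
    "memory_ecc_corrections": (0, 100),  # assumed ECC-correction ceiling
}

def readings_ok(readings):
    """True if every monitored value is within its allowed range."""
    return all(lo <= readings[name] <= hi for name, (lo, hi) in LIMITS.items())

def step(state, readings):
    """One tick of the watchdog: hunker down on anomalies, resume when clean."""
    if state == RUNNING and not readings_ok(readings):
        return IDLE      # out of parameters: drop to idle mode
    if state == IDLE and readings_ok(readings):
        return RUNNING   # health check passed: resume normal operation
    return state

state = RUNNING
state = step(state, {"cpu_error_rate": 0.5, "memory_ecc_corrections": 3})
print(state)  # idle: the CPU error rate spiked out of range
state = step(state, {"cpu_error_rate": 0.001, "memory_ecc_corrections": 3})
print(state)  # running: readings are back in range, so the system resumes
```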
While there has been a lot of discussion about “what’s left for humans?” as AI improves at exponential rates — the customary answer is that humans need to focus on the things they are uniquely good at, such as creativity, intuition, and personal empathy — I think we now have to ask, “what’s left for firms?”
In many ways this is an old question, because it takes us back to the arguments of Nobel Laureates Ronald Coase and Oliver Williamson that firms exist to coordinate complex forms of economic activity in an efficient way. If computer technology has the capacity to simplify and streamline transaction costs, more and more work can be done through these smart-contract arrangements, making traditional human-managed firms obsolete. For example, when you say to Alexa “order more dog food,” a chain of activities is initiated that leads to the delivery of a fresh supply of Kibble 24 hours later, with little or no human intervention. This work is coordinated by a single firm, Amazon, but it often involves third parties (makers of dog food, delivery companies) whose systems interact seamlessly with Amazon’s.
But is this coordination logic, this ability to internalize transactions to make them more efficient, really the raison d’etre of firms? I would argue that it is just one among many reasons that firms exist. And as computer technology simplifies and reduces transaction costs further, it is these other things that firms do uniquely well that will come more to the forefront. Here are four areas where firms excel.
1. Firms create value by managing tensions between competing priorities.
In today’s parlance, firms have to exploit their established sources of advantage (to make profits today) while also exploring for new sources of advantage (to ensure their long-term viability). However, getting the right balance between these two sets of activities is tricky because each one is to a large degree self-reinforcing. Hence the notion of organizational ambidexterity — the capacity to balance exploitation and exploration.
Artificial intelligence is evidently helping many firms to exploit their existing sources of advantage — whether through process automation, improved problem-solving or quality assurance. Artificial intelligence can also be useful in exploring new sources of advantage: in the famous case of AlphaGo, the winning “strategy” was one that no human player had ever come up with; and computers are increasingly writing new musical scores and painting Picasso-like landscapes.
But AI is not helpful in managing the tension between these activities, i.e., knowing when to do more of one or the other. Such choices require careful judgment — weighing up qualitative and quantitative factors, being sensitive to context, or bringing emotional or intuitive factors into play. These are the capabilities that lie at the heart of organizational ambidexterity, and I don’t believe AI can help us with them at all right now. IBM’s recently announced Project Debater is a case in point: it showed just how far AI has come in terms of constructing and articulating a point of view, but equally how much better humans are at balancing different points of view.
2. Firms create value by taking a long-term perspective.
As a variant of the first point, firms don’t just manage trade-offs between exploitation and exploration on a day-to-day basis; they also manage trade-offs over time. My former colleagues Sumantra Ghoshal and Peter Moran wrote a landmark paper arguing that, unlike markets, firms deliberately take resources away from their short-term best use in order to give themselves the chance to create even more value over the long term. This “one step back, two steps forward” logic manifests itself in many ways — risky R&D projects, pursuing sustainability goals, paying above-market wages to improve loyalty, and so on. We take it for granted that firms will do many of these things, but again they involve judgments that AI is ill equipped to help us with. AI can devise seemingly cunning strategies that look prescient (remember AlphaGo), but only when the rules of the game are predetermined and stable.
An example: the “Innovator’s Dilemma” is that by the time it’s clear a disruptive technology is going to upend an incumbent firm’s business model, it’s too late to respond effectively. The incumbent therefore needs to invest in the disruptive technology before it is definitively needed. Successful firms, in other words, need to be prepared to commit to new technologies in periods of ambiguity, and to have a “willingness to be misunderstood,” in Jeff Bezos’s terms. This isn’t an easy concept for AI to get used to.
3. Firms create value through purpose — a moral or spiritual call to action.
There is a second dimension to long-term thinking, and that is its impact on individual and team motivation. We typically use the term purpose here, to describe what Ratan Tata calls a “moral or spiritual call to action” that leads people to put in discretionary effort — to work long hours, and to bring their passion and creativity to the workplace.
This notion that a firm has a social quality — a purpose or identity — that goes beyond its economic raison d’etre is well established in the literature, from March and Simon through to Kogut and Zander. But it still arouses suspicion among those who think of the firm as a nexus of contracts, and who believe that people are motivated largely through extrinsic rewards.
My view is that you just need to look at charities, open source software movements, and many other not-for-profit organizations to realize that many people actually work harder when money is not involved. And it is the capacity of a leader to articulate a sense of purpose, in a way that creates emotional resonance with followers, that is uniquely human.
Successful firms, in other words, institutionalize a sense of identity and purpose that attracts employees and customers. Ironically, even though blockchain technology is — by definition — about building a system that cannot be hacked, or misused by a few opportunists, people still prefer to put their faith in other people.
4. Firms create value by nurturing “unreasonable” behavior.
There are many famous cases of mavericks who succeeded by challenging the rules, such as Steve Jobs, Elon Musk, and Richard Branson. With apologies to George Bernard Shaw, I think of these people as unreasonable — they seek to adapt the world to their view, rather than learn to fit in. And if we want to see progress, to move beyond what is already known and proven, we need more of these types of people in our firms.
Unreasonableness is antithetical to the world of AI. Computers work either through sophisticated algorithms or by inference from prior data, and in both cases the capacity to make an entirely out-of-the-box leap doesn’t exist. Consider the case of investment management, where robo advisors are not just making trades; they are also providing investment advice to investors, at a fraction of the cost of human financial advisors. But as the Financial Times said last year, “when it comes to investing, human stupidity beats AI.” In other words, if you want to beat the market, you need to be a contrarian: you need to make investments that go against the perceived wisdom at the time, and you need to accept the risk that your judgment or your timing might be wrong. Both are qualities that, at the moment, are distinctively human.
So one of the distinctive qualities of firms is that they nurture this type of unreasonable behavior. Of course, many firms do their best to drive out variance, by using tight control systems and punishing failure. My argument is that as AI becomes more influential, through the automation of basic activities and simple contracts, it becomes even more important for firms to push in the other direction — to nurture unorthodox thinking, encourage experimentation, and tolerate failure.
In a recent Fast Company article, Vitalik Buterin described how all the elements of Uber’s ride-sharing service could be provided through Ethereum-based applications that worked seamlessly with one another: “the whole process is basically as before, but without the middleman [Uber].” This may be true, but it doesn’t necessarily follow that a computer-mediated service is the better option.
IBM and the U.S. Department of Energy’s Oak Ridge National Laboratory (ORNL) today unveiled Summit, the department’s newest supercomputer. IBM claims that Summit is currently the world’s “most powerful and smartest scientific supercomputer” with a peak performance of a whopping 200,000 trillion calculations per second. That performance should put it comfortably at the top of the Top 500 supercomputer ranking when the new list is published later this month. That would also mark the first time since 2012 that a U.S.-based supercomputer holds the top spot on that list.
Summit, which has been in the works for a few years now, features 4,608 compute servers with two 22-core IBM Power9 chips and six Nvidia Tesla V100 GPUs each. In total, the system also features over 10 petabytes of memory. Given the presence of the Nvidia GPUs, it’s no surprise that the system is meant to be used for machine learning and deep learning applications, as well as the usual high performance computing workloads for research in energy and advanced materials that you would expect to happen at Oak Ridge.
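The headline hardware totals are easy to cross-check from the counts given above (the petaflop line simply restates the quoted 200,000-trillion-calculations-per-second figure in the usual unit):

```python
# Totals implied by the hardware counts above: 4,608 servers, each with
# two 22-core Power9 CPUs and six Tesla V100 GPUs.
servers = 4_608
cpu_cores = servers * 2 * 22
gpus = servers * 6
print(cpu_cores)  # 202752 Power9 cores across the system
print(gpus)       # 27648 V100 GPUs

# The quoted peak of 200,000 trillion calculations per second, in petaflops.
peak_flops = 200_000 * 1e12
print(peak_flops / 1e15)  # 200.0 petaflops
```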
IBM was the general contractor for Summit, and the company collaborated with Nvidia, Red Hat and InfiniBand networking specialist Mellanox on delivering the new machine.
“Summit’s AI-optimized hardware also gives researchers an incredible platform for analyzing massive datasets and creating intelligent software to accelerate the pace of discovery,” said Jeff Nichols, ORNL associate laboratory director for computing and computational sciences, in today’s announcement.