This blog was co-authored by Robert W Hormuth, Vice President/Fellow, CTO, Server & Infrastructure Systems, and Jimmy Pike, Vice President/Fellow, Server Architect, Server & Infrastructure Systems, Office of the CTO.

Trends & Observations can serve two purposes: one, a view into a possible future state, and two, a reflection of things around us that can lead to disruptions. Sometimes you have to look closely to see the trees, and other times far away to see the forest.

#1 Customer is King

The customer, it turns out, is indeed always right... something we have all heard many times but often forget in the technology world. The winners this year will be the technology companies that truly listen and respond to their customers with products, solutions, and services that actually solve customer problems and result in better business outcomes.

#2 The Real Customer Value is in the Data

2018 will see companies forced to find value in their data or be disrupted by competitors that find ways to mine data to create business value and services. Much of this valuation will be done using ML/DL techniques – see #11. 2018 will also see both a heightened level of cyber-attacks and a whole new realm of security embedded in the very foundation of the server to protect a customer's most valuable asset – see #16.

#3 Fabrication Equality

Not a commodity, but rather the various chip makers are at or near enough to the same process node size that leadership via node size is no longer a differentiator. Thus execution, architectural choices, and proper product definition win.

#4 Competition for CPUs Emerges

Intel, AMD, Qualcomm, Cavium, and IBM emerge with competitive CPU offerings, with Fabrication Equality acting as an equalizer. This is healthy for the industry as a whole, enabling and driving new innovations that solve real customer problems.

#5 Memory-Centric Computing

In 2018 the industry will fully conclude that we must embrace memory-centric computing. This will open up innovation on a variety of fronts in HW and SW.
As more devices (FPGAs, Storage Class Memory, ASICs, GPUs...) move into the microsecond to sub-microsecond domain (see "Attack of the Killer Microseconds"), we can no longer treat these devices as second-class citizens behind a thick protocol stack, nor can we software-define them without losing their intrinsic value. Gen-Z is gaining greater industry participation as a truly open standard to address this problem. But the first step in any 12-step program is recognizing the problem.

#6 Rise of the Single Socket Server

The industry has been on a journey from large SMP machines to scale-out for years. We stopped at 2S, quite frankly, due to the lack of a real single-socket-optimized CPU. With core counts (32) and memory channels (8) continuing to rise, a single-socket server is more viable than ever. Dell EMC offers two single-socket AMD EPYC systems for these reasons (PowerEdge R7415 and PowerEdge R6415).

#7 Heterogeneous Computing Saves the Day

With CPU performance CAGR flatlining on general-purpose CPUs, and businesses looking to disrupt in the digital transformation ahead of the competition, businesses that want to get ahead and stay ahead will turn more toward specialized computing (GPUs, FPGAs, ASICs, SmartNICs) optimized for these new digital big-data problems, where ML techniques can be used to find the needle in the sea of data. Moore's Law, remember, is an economic law, not a performance law: it basically says that if you can extract enough value out of a silicon fab investment, you can continue to shrink your process about every two years. So where and how we spend those transistors is shifting.

#8 Let's get RedFish'y

Thanks for the memories, IPMI. After several attempts to standardize infrastructure systems management, the industry has finally rallied and succeeded with Redfish.
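Unlike IPMI's binary protocol, Redfish is a plain HTTPS/JSON REST API rooted at /redfish/v1, where each resource links to others via `@odata.id` references. As a minimal sketch, the snippet below walks a trimmed, illustrative service-root payload to build collection URLs; the host name is hypothetical, and a real BMC would return this document from `GET https://<bmc>/redfish/v1`:

```python
import json

# Illustrative Redfish service-root payload, trimmed to a few members.
# A real BMC returns a document like this from GET https://<bmc-host>/redfish/v1.
SERVICE_ROOT = json.loads("""
{
  "@odata.id": "/redfish/v1",
  "Id": "RootService",
  "RedfishVersion": "1.2.0",
  "Systems":  {"@odata.id": "/redfish/v1/Systems"},
  "Chassis":  {"@odata.id": "/redfish/v1/Chassis"},
  "Managers": {"@odata.id": "/redfish/v1/Managers"}
}
""")

def collection_url(root: dict, name: str,
                   host: str = "https://bmc.example.com") -> str:
    """Resolve a top-level Redfish collection (Systems, Chassis, ...) to a full URL."""
    return host + root[name]["@odata.id"]

print(collection_url(SERVICE_ROOT, "Systems"))
# -> https://bmc.example.com/redfish/v1/Systems
```

From there a management client simply issues further GETs against each linked resource, which is what makes Redfish so much easier to script against than IPMI.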
We can thank the founding crew of Dell EMC, HPE, and Emerson for having the vision, patience, and pragmatic approach, along with the next wave of supporters (Microsoft, VMware, Intel) that took Redfish to the DMTF, where a broad set of industry partners are now working together to continue Redfish's expansion. SNIA has joined the party with Swordfish for storage management.

#9 Storage Class Memory (SCM) Finds a Home in the Server

The advent of storage class memory will disrupt server applications, operating systems, and hypervisors. We have to remember, though, that we have spent the last two decades pushing scale-out and stateless computing. Persistence was once frowned upon to enable application agility, but with the advent of real amounts of cost-effective and fast-enough persistence, things will change as the industry figures out how to use this new technology in due time. The first easy use case will be in storage applications, especially software-defined storage. Beyond storage, persistent memory will find a home in large in-memory computing and in memory-centric architectures that can be disaggregated and composed without trapping this valuable resource.

#10 Servers are not a Commodity

The notion that servers have become a commodity seems to come and go. But let's think about it for a minute... A commodity by definition is (1) a raw material or primary agricultural product that can be sold, such as copper or coffee, or (2) a useful or valuable thing, such as water or time. So let's take water by way of example – surely we all agree that water is a basic resource and widely available in modern industrialized countries – but is it really a commodity? Checking the shelves at 7-Eleven would seem to indicate that is NOT the case. There are 20+ types: different bottles, purification differences, additives, and so on. How the commodity (water) is bottled, sold, distributed, and filtered is vastly different. We pay more per gallon for bottled water than for gasoline here in the USA.
So the net of the story is that water is a commodity, but bottled water is NOT. Now apply that thinking to servers and you find that compute cycles are the commodity (the water) and the server is the bottling of those compute cycles. Now that computing is ubiquitous in every toy, IoT device, and mobile device, compute cycles are more or less a raw material of our digital lives. What servers do is bottle commodity compute cycles. How servers bottle up the compute – adding DRAM, IO, slots, drives, systems management, high availability, density, redundancy, and efficiency, all serviced, delivered, and warranted in a wrapper of security – is how they are not a commodity, but are in fact packaging up the real commodity: compute cycles. The fact that the Super7 hyperscalers have not aligned on a common server form factor solidifies these points. So, while we could all drink water from the Hudson, well...

#11 Machine Learning Disrupts

Businesses will adopt machine learning techniques and disrupt, or someone else will disrupt them – hence greater demand for datacenters to become agile, automated, and orchestrated while adopting new heterogeneous compute. Businesses will, however, begin to recognize that ML is a tool and not the answer to all problems. This will lead to a more practical focus on the problems where it excels.

#12 Rise of the Edge – Compute Follows the Data

Computing demands have always followed the data: from Mainframe-Terminal, to Client-Server, to Mobile-Cloud, and now the emerging IoT-Edge era. The location of compute has always been based on an economic function Fn(cost of compute cycles, size of data, complexity of data, bandwidth costs). Those variables have driven where we compute since the dawn of computing and will continue to do so into the future. The cost of networking will further drive the realization that not only should compute occur at the edge, but data storage as well. Data should be stored as close as possible to the point of creation.
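The economic function Fn(...) above can be made concrete with a toy cost comparison: process a batch at the edge (ship only the extracted information) versus ship the raw data to a central cloud. All prices below are made-up illustrative numbers, not measurements:

```python
# Toy model of Fn(cost of compute, size of data, bandwidth costs):
# is it cheaper to process a data batch at the edge or in a central cloud?

def processing_cost(data_gb: float,
                    compute_cost_per_gb: float,
                    transfer_cost_per_gb: float,
                    reduction_ratio: float) -> dict:
    """Compare edge processing vs shipping raw data to the cloud.

    reduction_ratio: fraction of the raw data that survives edge processing
    (e.g. 0.01 means only 1% - the extracted information - must travel).
    """
    # Edge: compute locally, transfer only the reduced result.
    edge = data_gb * compute_cost_per_gb + data_gb * reduction_ratio * transfer_cost_per_gb
    # Cloud: transfer everything raw, then compute centrally.
    cloud = data_gb * transfer_cost_per_gb + data_gb * compute_cost_per_gb
    return {"edge": edge, "cloud": cloud,
            "winner": "edge" if edge < cloud else "cloud"}

# A factory producing 10 TB/day whose edge analytics keeps only 1% of the data:
print(processing_cost(10_000, compute_cost_per_gb=0.002,
                      transfer_cost_per_gb=0.05, reduction_ratio=0.01))
```

With these assumed numbers the bandwidth term dominates, which is exactly why compute (and storage) gravitates toward where large volumes of data are created.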
Information from the data may be needed elsewhere, and even replication of some of the data may happen elsewhere, but that will not be the general rule. Look for more compute at base stations, retail stores, factories, and the like – anywhere large amounts of data are created to make business-critical decisions, or where one wants to create a more real-time experience for the consumer. This will also spawn the next generation of hybrid cloud via distribution of processing between edge servers/edge datacenters and centralized datacenters/cloud. The goal will be to find the valuable data near the source (where the data is generated), minimize the amount of data that needs to be stored at a centralized location (public/private cloud), and deliver results most efficiently to where they are needed. Flawless remote operation and administration (no touch required) will become the emerging goal. This will begin the revolution toward truly distributed computing, with processing performed and data stored at the edge.

#13 Public Cloud | Hybrid Cloud | Private Cloud Find Balance

The various cloud models will continue to grow and blur the lines between compute consumption models. Companies will realize these are styles of compute, not locations. As ease of use equalizes across CI and HCI, companies will refine their TCO models and find a need for all three consumption models across different needs. The multi-tenant nature of the Public Cloud and the value of data will continue to raise security concerns there.

#14 Software Continues to Eat Hardware

Personally, as server dudes, we love software, and Wirth's law is fantastic. (Wirth's law, also known as Page's law, Gates' law, and May's law, is a computing adage which states that software is getting slower more rapidly than hardware is getting faster.) The evolution of infrastructure and software platform models continues to add abstraction.
From MaaS (Metal as a Service) to IaaS (Infrastructure as a Service) to SaaS (Software as a Service) to PaaS (Platform as a Service) to CaaS (Container as a Service) to the new FaaS (Function as a Service), the goal of all of these models is to continue the SW abstractions that aid application agility, development speed (DevOps), deployment, orchestration, and management of application lifecycles. FaaS is positioned to be quickly adopted for greenfield applications, while CaaS will likely take over as the predominant deployment model within IaaS or PaaS environments for legacy applications. Now the funny thing: especially in the machine learning space, we see more and more MaaS pickup to eke out every last bit of performance. You know the old saying: what is new is old, and what is old is new again.

#15 SSD/NVMe in Enterprise Continue Rapid Adoption

Need more be said? NVMe and SSDs will displace rotating disks in servers – from boot drives, to high-performance IOPS monsters, to super-capacity storage. They simply make sense, and the cost points and capacities make them a no-brainer given the gains. Case in point: NVMe SSDs have already reached price parity with SAS SSDs.

#16 Security Must be End-to-End

2018 will see a definite shift in terms of security and the continuation of 2017 initiatives. For example, the Dell PowerEdge 14G server family now has a cryptographic security architecture where part of a key pair is immutable, unique, and set in the hardware during the system fabrication process. This method provides an indisputable root of trust embedded in the hardware, which eliminates the "man in the middle" opportunity all the way from the manufacture of the server to delivery to the customer, and from power-on to the transfer of control to the operating system. The term security seems incomplete considering the scope of today's needs, especially in light of recently exposed security holes present in all modern CPU architectures.
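The hardware root of trust described above boils down to a simple pattern: an immutable key set at fabrication, and firmware that is cryptographically verified against it before control is transferred. As a minimal sketch (real platforms verify an asymmetric signature with a fused public key; an HMAC with a fused secret stands in for it here, and every name below is hypothetical):

```python
import hashlib
import hmac

# Stand-in for the immutable key burned into hardware at fabrication.
FUSED_KEY = b"set-once-at-the-factory"

def sign_firmware(image: bytes) -> bytes:
    """What the factory signing step would attach to a firmware image."""
    digest = hashlib.sha256(image).digest()
    return hmac.new(FUSED_KEY, digest, hashlib.sha256).digest()

def verify_before_boot(image: bytes, signature: bytes) -> bool:
    """What the boot ROM checks before transferring control to the image."""
    digest = hashlib.sha256(image).digest()
    expected = hmac.new(FUSED_KEY, digest, hashlib.sha256).digest()
    return hmac.compare_digest(expected, signature)

firmware = b"\x7fELF...bios-payload"
sig = sign_firmware(firmware)
print(verify_before_boot(firmware, sig))            # True: untouched image boots
print(verify_before_boot(firmware + b"\x00", sig))  # False: tampered image is rejected
```

Because the key is set once in hardware, anyone who modifies the image in transit (the "man in the middle" window from factory to customer) cannot produce a signature the boot check will accept.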
2018 will see security expand into what is better termed system-wide protection, integrity verification, and automated remediation. While impenetrability is always the objective, given the increasing complexity and sophistication of attackers, it is very likely that additional vulnerabilities and exploits will emerge. As recently seen, remediation can be extremely costly in terms of performance, causing a reemergence of single tenancy in some environments. One of the 2018 objectives will be rendering a successful intrusion harmless – in other words, if someone can get into the platform, making sure they cannot obtain meaningful information or do damage. This will lead to a more intense trust strategy based on stronger identity management. Identity at all levels (user, device, and platform) will be a great focus, and will require a complete end-to-end trust chain for any agent that is able to install executables on the platform, along with policy tools for ensuring trust. This will likely include options based on blockchain. Emerging standards like Gen-Z, where keys are embedded in the transaction layer, will also be required. In open environments where any user can run code, this struggle over "who is ahead" is likely to continue. Greater focus on encryption will emerge, requiring any data at rest to be encrypted. (However, even this does not eliminate the risk associated with recent CPU vulnerabilities.) System designers will be forced to trade the complications of data management, and the loss of features like deduplication, against risk – causing many software-defined strategies to be reconsidered against what is available on purpose-built systems.

#17 Composable Marketing

Hype stays ahead of reality. Composability was a big buzzword in 2017.
Unfortunately, as blogged by Dell EMC, the hype is ahead of reality until we get new architectures in place that allow true composability via disaggregation – enabling memory-centric rather than CPU-centric computing for these new classes of microsecond devices. The industry is on the right path with Gen-Z, but we are still a ways out.
One of the favorite parts of my role at Dell EMC is that I get to speak to our incredible customers almost every day. It is apparent from these conversations that data is playing an increasingly important role in determining whether organizations will thrive in a digital-first world. To truly unlock the value of their data, however, organizations need a fundamentally different approach to IT, as traditional infrastructure was not built to handle the magnitude of data generated by businesses today. This is precisely why we built Dell EMC ECS, the modern object-storage platform that brings cloud scale and economics to our customers' datacenters.

Today, at VMworld 2018, we are introducing the EX-Series, ECS' next-generation platform. The EX-Series enhances the flexibility of ECS through two brand-new offerings that are designed to meet the needs of an expanded set of use cases and organization sizes:

EX300: With a minimum starting capacity of just 60TB (83% lower than previous ECS platforms), the EX300 is the perfect storage platform for cloud-native initiatives, such as apps built using Pivotal Cloud Foundry (PCF). As these initiatives gain traction and need more storage capacity, ECS can scale seamlessly with them. The EX300 is also well suited for modernizing Dell EMC Centera environments.

EX3000: Packing up to 8.6 PB in a single rack, the EX3000 is designed to support organizations with a large data footprint. This makes it the ideal solution for archiving warm and cold data, building internal cloud-storage portals, and supporting large cloud-native apps as well as Internet of Things (IoT) or analytics initiatives. Due to its superior storage density (almost 50% higher than previous ECS platforms), the EX3000 is a great fit for datacenter consolidation efforts.

Over 1,000 organizations around the world are utilizing ECS for their digital initiatives. As an example, the Charles Stark Draper Laboratory, a leading research and engineering firm, has been able to reduce capital expenditures by 30% and improve IT efficiency by 75% thanks to ECS. You can check out their story in this video: https://www.dellemc.com/resources/en-us/asset/presentations/Draper_with_Dell_EMC_ECS_Foundation_for_Innovation.mp4

ECS has been recognized by the analyst community as well. Gartner has recognized Dell EMC as a Leader in the Magic Quadrant for Distributed File Systems and Object Storage. Scott Sinclair, senior analyst at Enterprise Strategy Group (ESG), also acknowledges the value ECS can bring to customers: "Organizations today require more from their data storage infrastructure in terms of capacity, performance, and resiliency than ever before. The real challenge, however, is addressing these needs, staying in budget, and simultaneously positioning the organization for success as a more emergent set of workloads like Machine Learning, IoT, and Analytics come online. With the new ECS EX-Series, I believe Dell EMC will help organizations overcome these challenges with an affordable, flexible, and scalable unstructured storage solution."

The ECS EX-Series provides organizations with unparalleled flexibility in starting with just the capacity they need and growing as their needs change in the future. Additionally, an investment in ECS is protected by the Dell EMC Future-Proof Loyalty Program, which provides benefits like the 3-year Satisfaction Guarantee, Hardware Investment Protection, Never-Worry Data Migrations, and All-Inclusive Software.

To learn more about how ECS can support your digital initiatives, visit the ECS website, or follow @DellEMCStorage on Twitter.
Baker’s Half Dozen – Episode 2: Will Autonomous Vehicles Take Our Jobs? Or Strengthen the Human-Machine Partnership?
Episode 2 of Baker's Half Dozen is upon us. This month Matt answers:

Is it the hardware driving AI acceleration, or do we need to write better code?
Will autonomous vehicles take our jobs or strengthen human-machine partnerships?
Are mid-course dividends worth the cost of failed innovation?

If you've got questions about this episode, or a question you'd like Matt to answer in the next episode, comment below or tweet @mattwbaker using #BakersHalfDozen.

Episode 2 Show Notes:

Introduction with Matt Baker
Item 1 – Self-Driving Threatening Drivers
WSJ: Self-Driving Technology Threatens 300,000 Trucking Jobs
Item 2 – AV Solution Difficulties
Elon Musk: Generalized AV capabilities will be difficult in the near term
Gartner Hype Cycle
Lane Keeping
Radar-based Cruise Control
Item 3 – 3 V's of Big Data
Item 4 – AI without GPUs
Wu Feng VT
Item 5 – AWS/VMware
Item 6 – UT TACC Frontera
TACC Frontera to Push the Frontiers of Science
Item 6.5 – Deploying OpenStack
Close
Disagree with Matt using #BakersHalfDozen
Also, agree with Matt using #BakersHalfDozen
Have you ever scaled a 20-foot tree, hung off the side of a skyscraper, been 700 meters underground, or labored on a ship for 36 hours straight? How about run diagnostics on a 15-foot electricity pole, taken pipeline readings in zero-degree weather, or checked refinery equipment in a thunderstorm?

It's hard to imagine that this is a normal day of work for some people... and it's for this reason that Dell is excited to celebrate them by sharing the Top 20 Most Rugged Jobs in America.

From police officer, to commercial fishing specialist, to oil & gas engineer, to lumberjack, the people who hold these rugged jobs encounter the most extreme physical and environmental elements – all in a day's work.

Dell's Rugged testing lab team knows these conditions almost as intimately as the job holders themselves, because they are in charge of taking Dell Rugged devices off-road and into the field with the workers who rely on them to get the most rugged jobs done. Anthony Bundrant, head of the Dell Rugged Labs testing facility, works with his team to durability-test these specialty devices to withstand:

Temperatures hot enough to fry an egg and cold enough to freeze an ice cube
Stormy winds up to 70 miles per hour and nearly 6 inches of rain per hour
40-mile-per-hour sandstorms

Dell's specialty Rugged laptops and tablets are purpose-built, designed, and tested to the point of failure. They can withstand the rigor of the most extreme environments and harshest temperatures. Dell's Rugged Lab tests each system to meet or exceed industry standards, setting the durability and performance bar higher.

We work closely with law enforcement, the military, and many other private-sector industries like oil & gas and manufacturing, all of which require durable, high-performance equipment that can take a spill in the field and keep going.
Our Rugged laptops and tablets are built to survive the rigors of the real world – especially in challenging and unpredictable environments, rain or shine.

Do you have what it takes to work some of the most rugged jobs in America? Each job on the Top 20 list was scored against three key factors: physical labor, injury risk, and environmental exposure. Check out the list below to see which careers made the grade!

Operating thermal range: -20F to 145F (Rugged Extreme notebooks & tablet); based on independent 3rd-party testing. IP-65 rated for maximum protection against water, dust, and dirt ingress (Rugged Extreme notebooks & tablet); based on independent 3rd-party testing.
BERLIN (AP) — A closely watched annual study by an anti-graft watchdog organization suggests that countries with the least corruption have been best positioned to weather the health and economic challenges of the coronavirus pandemic. Transparency International's 2020 Corruption Perceptions Index, released Thursday, concluded that countries that performed well invested more in health care and were "better able to provide universal health coverage and are less likely to violate democratic norms." Transparency's chief says: "COVID-19 is not just a health and economic crisis, it is a corruption crisis – and one that we are currently failing to manage."
COLUMBIA, S.C. (AP) — Some conservatives in South Carolina say a bill that passed the state Senate banning most abortions is a big step for them, but isn’t the end of their efforts. The ultimate goal of those groups is what’s called a “personhood law,” which would dictate that life begins at conception. That would give a fetus the rights of any citizen and require “due process of law” to end its life under the U.S. Constitution. But most groups say they’ll wait to make sure the latest bill passes the House and is signed into law before discussing their next steps to push for further restrictions.