Full Case Study: IBM’s Extensive Marketing Strategy & Top 8 Competitors


By Aditya Shastri

The International Business Machines Corporation was established in 1911 as the Computing-Tabulating-Recording Company. It changed its name to IBM in 1924. IBM largely deals in hardware, software, consultancy, and hosting services. IBM has had an extensive journey so far, having managed to stay in the market for more than a century now.

In the IBM case study, we shall talk about IBM’s marketing strategy, marketing mix, competitors’ analysis, BCG matrix, marketing campaigns, and social media marketing presence. So without further ado, let’s get started by getting to know the company a little better.


IBM is a multinational technology corporation that specialises in hardware, software, cloud-based services, and cognitive computing. It is headquartered in New York, United States, and has five strategic business units – Global Financing, Systems and Technology, Technology Services, Business Services, and Software.

IBM is a top producer and manufacturer of computer software and hardware. It is also behind great innovations like:

  • Automated Teller Machine (ATM)
  • Floppy Disk 
  • Magnetic Stripe Card
  • Hard Disk Drive 
  • Relational Database 
  • UPC Barcode 
  • SQL Programming Language 
  • Dynamic Random Access Memory (DRAM)

IBM has consistently identified upcoming technological needs and come up with innovative solutions that leave a mark. Let’s start delving into the IBM case study by first learning about its marketing mix.

IBM’s Marketing Mix

The marketing mix refers to the range of tactics and strategies a company uses to promote its product or service in the marketplace. Price, Product, Promotion, and Place are the four Ps that make up a traditional marketing mix. Following is IBM’s marketing mix:

IBM’s Product Strategy

IBM offers a diverse range of goods and services. Cognitive solutions, global business services, technology services and cloud platforms, systems, global financing, distributed computing, data and analytics, IT infrastructure, and the Internet of Things are some of IBM’s offerings. IBM’s Cloud Data Encryption Services (ICDES) is a one-of-a-kind solution that uses sophisticated technology to secure client data.

The Technology Services & Cloud Platforms product segment also includes infrastructure services, technical support services, and integration tools. Global Financing provides customer financing, industrial financing, and remanufacturing and remarketing facilities. The emphasis on these core product lines is reflected in IBM’s organisational structure, where each product line is represented as a division.

IBM’s Place Strategy

This part of the marketing mix identifies the channels, or places, by which IBM’s products are distributed. These locations have an impact on the company’s strategic success in targeting customers. In this case, IBM transacts with consumers and distributes its goods through the following networks:

  • Official Website
  • Business Partners
  • Delivery Centers
  • Warranty Service Providers

Customers can find useful information about IBM’s products on the company’s official website. The website is an easy way to connect with potential clients all over the world. Customers may also use the official website to create and pay for accounts to use the company’s cloud-based services. Business partners, on the other hand, are the company’s standard method of reaching its target market. Some of the company’s products, such as Global Process Services, are delivered via delivery centres. In addition, the company has warranty service providers for existing customers’ device repair and servicing needs.

IBM’s Pricing Strategy

For its information technology products, IBM uses the following pricing strategies:

  • Market-oriented pricing strategy
  • Value-based pricing strategy

The market-oriented pricing strategy aims to set prices that are comparable to current prices in the information technology industry for certain goods. For example, IBM’s online products, such as cloud platform services, are competitively priced owing to the high level of competition and price sensitivity that characterise the cloud-based services market.

The value-based pricing approach, on the other hand, is seen in some of the company’s product lines. The aim of this strategy is to assess appropriate prices and price ranges based on how IBM’s goods are perceived and needed by customers. For example, the value-based pricing approach is used to price the company’s customised business machines for restaurant chains. This part of the marketing mix is influenced by the cost leadership strategy and the market penetration intensive strategy.

IBM’s Promotion Strategy

Following are the tools IBM uses for promotion:

  • Advertising (primary)
  • Direct marketing (primary)
  • Sales promotion
  • Personal selling
  • Public relations

IBM’s products are primarily promoted through advertising. The business advertises in both print and online media, including famous news websites. Direct marketing, on the other hand, entails direct contact between the company and its corporate customers, especially when delivering new goods to existing customers. For example, IBM sends emails about new products to companies that already use its systems and services. Furthermore, sales promotion is used on occasion to maximise the company’s share of the information technology industry. Discounts and free trials, for example, are provided to entice more consumers to try out the company’s offerings, such as cloud-based services.

Personal selling is used to cater to the needs of individual consumers, such as those in small towns. This communication strategy is often used to support direct marketing. Sponsorships of activities are a part of public relations. These promotional activities show that IBM relies heavily on advertisements, but also on other forms of communication for this part of the marketing mix.

The marketing mix, thus, shows the company’s comprehensive 4Ps strategy. Next up in the IBM case study, we take a look at its competitors.

IBM’s Competitors  

IBM works in an industry with a huge number of competitors and a constant stream of new entrants. Below are some of IBM’s biggest competitors:

  • Hewlett-Packard (HP): Information technology company based in California
  • Xerox: Producer and seller of print and digital products, based in Connecticut
  • Accenture: Computer services and solutions company based in Dublin, Ireland
  • Oracle: Advanced technology solutions company based in California
  • DXC Technology: Modernises IT processes, ensuring system security, scalability, and cloud optimisation
  • Dell Technologies: Provides the latest computer and technology solutions
  • ODM Direct: Provider of cloud services
  • Inspur: Provider of cloud computing, big data, key application hosts, servers, storage, artificial intelligence, and ERP

[Infographic: worldwide AI services market share by company, 2018]

As we can see from the 2018 infographic above, IBM has the highest share (9.2%) of the AI services industry amongst its biggest competitors worldwide. This shows that IBM is on the right track and holds a huge chunk of market share, indicating its competitive success.

Now that we thoroughly understand the company, its business, and market position, let us finally get into its marketing strategy.  

IBM’s Marketing Strategy

IBM’s marketing strategy includes significant investments in both conventional and online advertising, as well as promotional budgets, all of which are used to keep prospective consumers informed about the company’s constantly changing product lines and to strengthen brand recognition. Equally significant is its track record of investing millions in hiring, building, and compensating one of the most experienced sales teams in the industry.

Talking about IBM’s STP (segmentation, targeting, and positioning) strategy, which is a huge part of a company’s marketing plans: IBM’s market segmentation variables include psychographic, geographic, and demographic factors. IBM employs a differentiated targeting strategy, offering customers specific products and services based on their needs. The company positions itself as an organisation that generates value for its stakeholders through the value distribution chain, employing a user-benefit-based positioning strategy. IBM has always emphasised differentiating itself through a consistent value proposition and innovation.

Another analysis that helps in understanding a company and its success is the BCG matrix. Let’s take a look at IBM’s BCG matrix.

IBM’s BCG Matrix

[Image: the BCG growth-share matrix and its four quadrants]

Boston Consulting Group’s (BCG) product portfolio matrix is also known as the growth-share matrix. It helps one understand how a company’s products are doing in the market and how they could be improved. The BCG matrix has four quadrants, as shown in the image above. The products in the star quadrant have high industry growth and a high market share.

The products in the question mark quadrant have high industry growth but low market share. The products in the cash cow quadrant have a high market share but low industry growth. Finally, the pet/dog quadrant has products with low market share and low market growth. After understanding what a BCG matrix stands for, let’s discuss IBM’s BCG matrix.

IBM offers different plans to simplify various sets of processes with the help of its five Strategic Business Units (SBUs), as we’ve learnt before.

  • The Technology Services segment offers IT infrastructure and integrated technology services. It is in the Stars category of the BCG matrix
  • The Business Services segment deals in consulting and application management services. This is also in the Stars category of the BCG matrix
  • The other segments are still in the Question Mark category of the BCG matrix, as there is a lot of competition in these industries (a small classification sketch follows below)
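To make the quadrant logic concrete, here is a minimal sketch of the growth-share classification in Python. The growth and relative-share thresholds, and the per-SBU figures, are illustrative assumptions, not IBM data.

```python
# Minimal sketch of BCG growth-share classification.
# Thresholds and the example figures below are illustrative assumptions.

def bcg_quadrant(market_growth: float, relative_share: float) -> str:
    """Classify a business unit by industry growth rate and relative market share."""
    high_growth = market_growth >= 0.10   # assumed cut-off: 10% annual growth
    high_share = relative_share >= 1.0    # share relative to the largest competitor

    if high_growth and high_share:
        return "Star"
    if high_growth:
        return "Question Mark"
    if high_share:
        return "Cash Cow"
    return "Dog"

# Hypothetical inputs for IBM's SBUs, for illustration only.
sbus = {
    "Technology Services": (0.12, 1.4),
    "Business Services":   (0.11, 1.2),
    "Software":            (0.15, 0.7),
    "Systems":             (0.14, 0.8),
    "Global Financing":    (0.13, 0.6),
}

for name, (growth, share) in sbus.items():
    print(f"{name}: {bcg_quadrant(growth, share)}")
```

With these assumed inputs, the first two SBUs land in Stars and the rest in Question Marks, mirroring the placement described above.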

A large part of a company’s marketing involves delivering successful marketing campaigns. Let us talk about some of IBM’s marketing campaigns.

IBM’s Marketing Campaigns

Over the many years of its long journey, IBM has produced an extensive number of marketing campaigns. In this segment of the IBM case study, we talk about two of its most memorable ones.

IBM’s “Code and Response”


Earthquakes, floods, cyclones: natural disasters strike all over the world, and South America is no exception. IBM hence wanted to create an initiative dedicated to protection and security in the face of natural disasters. Developers were selected from all over the world to take action and create creative solutions that help avoid damage. Community advocacy, open-source support, and innovation were at the heart of the enterprise and the campaign. The campaign reached 908 million people on World Humanitarian Day, boosting IBM’s brand awareness. With 100,000+ programmers from 156 nations, the campaign was a definitive success.

IBM’s “Smarter Planet”


The world is getting smarter with new technology and AI. Today, technology is a part of almost everything we do. IBM came up with its campaign “Smarter Planet” which had the same idea. Its vision was to make healthcare, retail, finance, transportation, cities, and other fields ‘smart’ and hence, better with digital technology.

Following are some of its Smarter Planet ads:

[Image: a selection of Smarter Planet ad creatives]

Following were the few results of this campaign:

  • IBM worked with the Stockholm city authorities to design and implement a congestion-management system. Within 4 years, it substantially reduced traffic congestion at peak and non-peak times, cut vehicle emissions and driver delays, and increased the use of public transportation.
  • IBM developed Syracuse University’s Green Data Centre (GDC), which aimed to use advanced techniques in building design and management, energy generation, cooling technologies, and IT system management. The GDC uses half the energy it consumed before and produces outstanding results.
  • A telemedicine initiative was launched to provide advanced healthcare to patients in rural Louisiana, whose access to healthcare services had been limited. It reduced duplicate testing by 93%.

These were two of IBM’s most successful campaigns, each delivering exactly the outcomes it aimed to achieve. In the last segment of this blog, we shall discuss IBM’s social media marketing.

IBM’s Social Media Marketing Presence

It is imperative for a technology company to have an active digital presence. This not only aids its marketing efforts but also establishes a distinct brand image in the consumers’ minds. Let’s analyse IBM’s social media marketing presence.

  • IBM was an early user of social networking platforms, even before the spread of Twitter.
  • IBM today has several Twitter accounts to serve different types of customers.
  • The company is also very active on Facebook. Its primary page posts general information and the latest news on IBM, while the other pages cover broader topics like social business and career development.


  • IBM’s Instagram page highlights its creative imagination, giving us a peek behind the scenes of the organization. Not only does it post gorgeous pictures showing IBM’s products and corporate culture, but it also posts pictures of the company’s different offices from all around the world. The audience is encouraged to caption the page’s photos and to leave messages for IBM’s employees. This ensures interactive and engaging content.


  • IBM has several YouTube channels, a Vine account, and a LinkedIn page. It has 37,000+ followers on Google+ and has a collection of Pinterest boards on topics like Women in Tech, Big Data, and IBM History. 


Clearly, from IBM’s first encounter with social media to their crafty use of all the newest platforms and features today, IBM has carved its own unique presence in the social arena.

IBM is an organization centered around innovation, not only with its products but also with its marketing and organisational structures. It has correctly identified its target markets and worked to provide them with quality services. All in all, IBM has correctly identified the needs of the people and organizations, made breakthrough products accordingly, and marketed them in such a way that they’re doing exceptionally well even today. 

Thank you for reading our IBM case study. We hope you found what you were looking for and learnt more about IBM and its marketing. If you did, kindly comment down below and let us know!


Author's Note: My name is Aditya Shastri and I have written this case study with the help of my students from IIDE's online digital marketing courses in India. Practical assignments, case studies & simulations helped the students from this course present this analysis. Building on this practical approach, we are now introducing a new dimension for our online digital marketing course learners - the Campus Immersion Experience. If you found this case study helpful, please feel free to leave a comment below.




Case Study of IBM: Employee Training through E-Learning

“E-learning is a technology area that often has both first-tier benefits, such as reduced travel costs, and second-tier benefits, such as increased employee performance that directly impacts profitability.” – Rebecca Wettemann, research director for Nucleus Research

In 2002, the International Business Machines Corporation (IBM) was ranked fourth by Training magazine on its ‘2002 Training Top 100’ list. The magazine ranked companies based on their commitment towards workforce development and the training imparted to employees even during periods of financial uncertainty.


Since its inception, IBM had been focusing on human resources development: the company treated the education and training of its employees as an integral part of their development. During the mid-1990s, IBM reportedly spent about $1 billion on training its employees. However, in the late 1990s, IBM undertook a cost-cutting drive and started looking for ways to train its employees effectively at lower costs. After considerable research, in 1999, IBM decided to use e-learning to train its employees. Initially, e-learning was used to train IBM’s newly recruited managers.

IBM saved millions of dollars by training employees through e-learning. E-learning also created a better learning environment for the company’s employees compared to traditional training methods. The company reportedly saved about $166 million within one year of implementing the e-learning program for training its employees all over the world. The figure rose to $350 million in 2001. During this year, IBM reported a return on investment (ROI) of 2,284 percent from its Basic Blue e-learning program. This was mainly due to the significant reduction in the company’s training costs and the positive results reaped from e-learning. Andrew Sadler, director of IBM Mindspan Solutions, explained the benefits of e-learning to IBM: “All measures of effectiveness went up. It’s saving money and delivering more effective training, while at the same time providing five times more content than before.” By 2002, IBM had emerged as the company with the largest number of employees enrolled in e-learning courses.
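As a rough illustration of how an ROI figure like 2,284 percent is computed, here is the standard formula, net benefit over cost. The dollar values below are assumptions chosen only to reproduce a result of that size; they are not IBM’s actual program figures.

```python
# Illustrative ROI arithmetic; the dollar figures are assumptions,
# chosen only to reproduce a 2,284% result like the one reported.

def roi_percent(benefits: float, costs: float) -> float:
    """Classic ROI: net benefit relative to cost, as a percentage."""
    return (benefits - costs) / costs * 100

# Hypothetical example: a program costing $1.0M that returned $23.84M
# in savings and productivity gains would show an ROI of 2,284%.
print(f"{roi_percent(benefits=23.84e6, costs=1.0e6):.0f}%")  # -> 2284%
```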

Though there were varied opinions about the effectiveness of e-learning as a training tool for employees, IBM saw it as a major business opportunity and started offering e-learning products to other organizations as well. Analysts estimated that the market for e-learning programs would grow from $2.1 billion in 2001 to $33.6 billion in 2005, representing a 100 percent compound annual growth rate (CAGR).
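That estimate is internally consistent: growing from $2.1 billion to $33.6 billion over the four years from 2001 to 2005 is a 16-fold increase, which works out to a doubling every year. A minimal sketch of the CAGR arithmetic, using only the figures quoted above:

```python
# CAGR check using the figures quoted above: $2.1B (2001) -> $33.6B (2005).

def cagr(start_value: float, end_value: float, years: int) -> float:
    """Compound annual growth rate over the given number of years."""
    return (end_value / start_value) ** (1 / years) - 1

growth = cagr(start_value=2.1, end_value=33.6, years=4)  # 2001 to 2005 is 4 years
print(f"CAGR: {growth:.0%}")  # -> 100%, matching the analysts' estimate
```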

Background Note

Since the inception of IBM, its top management laid great emphasis on respecting every employee, believing that every employee’s contribution was important for the organization. Thomas J. Watson Sr. (Watson Sr.), the father of modern IBM, had once said, “By the simple belief that if we respected our people and helped them respect themselves, the company would certainly profit.” The HR policies at IBM were employee-friendly. Employees were compensated well, being paid above the industry average in terms of wages. The company followed a ‘no layoffs’ policy. Even during financially troubled periods, employees were relocated from the plants, labs and headquarters, and were retrained for careers in sales, customer engineering, field administration and programming.

To widen their knowledge base and broaden their perspectives, managers were also sent to educational programs at Harvard, the London School of Economics, MIT and Stanford. Those who excelled in these programs were sent to the Advanced Managers School, a program offered at about forty colleges including Harvard, Columbia, Virginia, Georgia and Indiana. IBM’s highest-ranking executives were sent to executive seminars organized at the Brookings Institution; this program typically covered a broad range of subjects, international and domestic, political and economic. IBM executives were exposed to topical events with a special emphasis on their implications for the company.

In 1997, Louis Gerstner (Gerstner), the then CEO of IBM, conducted research to identify the unique characteristics of the best executives and managers. The research revealed that the ability to train employees was an essential skill differentiating the best executives and managers. Therefore, Gerstner aimed at improving managers’ training skills. Gerstner adopted the coaching methodology of Sir John Whitmore, which was taught to managers through training workshops.

IBM trained about 5,000 new managers in a year. There was a five-day training program for all new managers, where they were familiarized with the basic culture, strategy and management of IBM. However, as jobs became more complex, the five-day program turned out to be insufficient to train managers effectively. The company felt that the training process had to be continuous and not a one-time event.

Gerstner thus started looking for new ways of training managers. The company specifically wanted its management training initiatives to address the following issues:

  • Management of people across geographic borders
  • Management of remote and mobile employees
  • Digital collaboration issues
  • Reductions in management development resources
  • Limited management time for training and development
  • Management’s low comfort level in accessing and searching online HR resources

Online Training at IBM

In 1999, IBM launched the pilot Basic Blue management training program, which was fully deployed in 2000. Basic Blue was an in-house management training program for new managers. It imparted 75 percent of the training online and the remaining 25 percent through the traditional classroom mode. The e-Learning part included articles, simulations, job aids and short courses.

The founding principle of Basic Blue was that “learning is an extended process, not a one-time event.” Basic Blue was based on a four-tier blended learning model. The first three tiers were delivered online, and the fourth tier consisted of one week of traditional classroom training. The program offered basic skills and knowledge to managers so that they could become effective leaders and people-oriented managers.

In the second tier, the managers were provided with simulated situations. Senior managers trained the managers online. The simulations enabled the managers to learn about employee skill-building, compensation and benefits, multicultural issues, work/life balance issues and business conduct in an interactive manner. Some of the content for this tier was offered by Harvard Business School, and the simulations were created by Cognitive Arts of Chicago. The online Coaching Simulator offered eight scenarios with 5,000 scenes of action, decision points and branching results. IBM Management Development’s website, Going Global, offered as many as 300 interactive scenarios on culture clashes.

In the third tier, the members of the group started interacting with each other online. This tier used IBM’s collaboration tools such as chats and team rooms, including IBM e-learning products like the Team-Room, Customer-Room and Lotus Learning Space. Using these tools, employees could interact online with the instructors as well as with peers in their groups. This tier also used virtual team exercises and included advanced technologies like application sharing, live virtual classrooms and interactive presentations on the web. In this tier, the members of the group had to solve problems as a team by forming virtual groups using these products. Hence, this tier focused more on developing the collaborative skills of the learners.

The tremendous success of the Basic Blue initiative encouraged IBM to extend training through e-learning to its sales personnel and experienced managers as well. The e-learning program for the sales personnel was known as ‘Sales Compass,’ and the one for the experienced managers as ‘Managing@IBM.’ Prior to the implementation of the Sales Compass e-learning program, the sales personnel underwent live training at the company’s headquarters and training campuses. They also attended field training programs, national sales conferences and other traditional training methods. However, in most cases these methods proved too expensive, ineffective and time-consuming. Apart from this, coordination problems also cropped up, as the sales team was spread across the world. Moreover, in a highly competitive market, IBM could not afford to keep its sales team away from work for weeks at a time.

Though Sales Compass was originally started in 1997 on a trial basis to help the sales team in selling business intelligence solutions to the retail and manufacturing industries, it was not implemented on a large scale. But with the success of Basic Blue, Sales Compass was developed further. The content of the new Sales Compass was divided into five categories: solutions (13 courses), industries (23 courses), personal skills (2 courses), selling skills (11 courses), and tools and job aids (4 aids).

It also enabled salespeople to sell certain IBM products designed for Customer Relationship Management (CRM), Enterprise Resource Planning (ERP), Business Intelligence (BI), and so on. Sales Compass also trained the sales personnel in skills like negotiating and selling services. Like the Basic Blue program, Sales Compass had simulations on selling products to a specific industry like banking, on how to close a deal, and so on. It also allowed its users to ask questions and had links to information on other IBM sites and related websites.

Sales Compass was offered to 20,000 sales representatives, client relationship representatives, territory representatives, sales specialists, and service professionals at IBM. Brenda Toan (Toan), global skills and learning leader for IBM offices across the world, said, “Sales Compass is a just-in-time, just-enough sales support information site. Most of our users are mobile. So they are, most of the times, unable to get into a branch office and obtain information on a specific industry or solution. IBM Sales Compass provides industry-specific knowledge, advice on how to sell specific solutions, and selling tools that support our signature selling methodology, which is convenient for these users.”

By implementing the above programs, IBM was able to reduce its training budget as well as improve employee productivity significantly. In 2000, Basic Blue saved $16 million while Sales Compass saved $21 million. In 2001, IBM saved $200 million, and its cost of training per employee reduced significantly – from $400 to $135. E-learning also resulted in a deeper understanding of the learning content by the managers. It also enabled the managers to complete their classroom training modules in less time compared to the traditional training methods used earlier. The simulation modules and collaboration techniques created a richer learning environment. The e-learning projects also enabled the company to leverage internal corporate knowledge, as most of the content they carried came from internal content experts.

IBM’s Cost Savings through E-Learning (in $ millions)

  • Basic Blue: 16.0
  • Going Global: 0.6
  • Coaching simulators: 0.8
  • Manager Quick-Views: 6.6
  • Customer-Room: 0.5
  • Sales Compass: 21.0
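A quick check of the table’s arithmetic: the line items sum to roughly $45.5 million. This minimal sketch uses only the figures from the table above.

```python
# Sum the per-program savings from the table above (in $ millions).
savings = {
    "Basic Blue": 16.0,
    "Going Global": 0.6,
    "Coaching simulators": 0.8,
    "Manager Quick-Views": 6.6,
    "Customer-Room": 0.5,
    "Sales Compass": 21.0,
}

total = sum(savings.values())
print(f"Total e-learning savings: ${total:.1f} million")  # -> $45.5 million
```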

IBM continued its efforts to improve the visual information in all its e-Learning programs to make them more effective. The company also encouraged its other employees to attend these e-learning programs. Apart from this, IBM planned to update these programs on a continuous basis, using feedback from its new and experienced managers, its sales force and other employees.

IBM used e-learning not only to train its employees, but also in other HR activities. In November 2001, IBM employees received the benefits enrollment material online. The employees could learn about the merits of various benefits and the criteria for availing these benefits, such as cost, coverage, customer service or performance, using an intranet tool called ‘Path Finder.’ This tool also enabled the employees to know about the various health plans offered by IBM. Besides, Path Finder took information from the employees and returned a preferred plan with ranks and graphs. This application enabled employees to see and manage their benefits, salary deductions, career changes and more. This obviously increased employee satisfaction. The company also automated its hiring process. The new tool on the company’s intranet was capable of carrying out most of the employee hiring processes. Initially, IBM used to take ten days to find a temporary engineer or consultant. With the new tool, the company was able to find such an employee in three days.

IBM Change Management Case Study

Change is a constant in the business world, and organizations that can effectively manage change are more likely to succeed. 

Change management is the process of planning, implementing, and controlling change within an organization to minimize negative impacts and maximize benefits. 

One company that has successfully implemented change management is IBM.

With a history spanning over a century, IBM has undergone significant changes over the years, including the implementation of change management to ensure a smooth transition. 

In this blog post, we will take a closer look at IBM’s change management case study, examining its background, change management strategy, and results. 

Brief History and Growth of IBM 

IBM, also known as International Business Machines Corporation, is an American multinational technology company that was founded in 1911. 

The company was initially formed as the Computing-Tabulating-Recording Company (CTR) through the merger of four separate companies: the Tabulating Machine Company, the Computing Scale Company, the International Time Recording Company, and the Bundy Manufacturing Company. 

In 1924, the company was renamed International Business Machines Corporation (IBM). IBM’s early products included tabulating machines, time clocks, and punched card equipment, which were used for data processing and information management. 

Over the years, IBM has evolved into a leading provider of enterprise technology solutions, including hardware, software, and services, serving clients in over 170 countries around the world.

IBM experienced significant growth in the mid-20th century, as it became a leading provider of computers and data processing equipment. 

In the 1950s, IBM introduced its first electronic computer, the IBM 701, which was followed by a series of other computer models that became increasingly advanced and sophisticated. 

IBM also played a key role in the development of the personal computer, releasing its first PC in 1981, which quickly became a standard in the industry. 

In the 1990s and early 2000s, IBM shifted its focus to software and services, becoming a leader in areas such as cloud computing, artificial intelligence, and cybersecurity. 

Today, IBM is a major player in the technology industry, with a global workforce of over 350,000 employees and revenue exceeding $70 billion in 2020.

Key drivers of change for IBM  

There were three dominant factors that created a need for IBM to implement effective change management processes to successfully navigate the challenges and opportunities it faced.

1. Technological advancement 

Technological advancements have been a key driver of change in the technology industry, and IBM was no exception. In the 1980s and 1990s, IBM faced significant disruption as the market shifted from mainframe computers to personal computers, which were smaller, cheaper, and more accessible to individuals and small businesses. 

This shift threatened IBM’s dominance in the computer industry, as it had built its reputation on large-scale mainframe computers. To adapt to this changing market, IBM had to shift its focus to services and software, invest in research and development to create new technologies and innovations, and develop new partnerships and alliances to expand its offerings. 

Additionally, the emergence of cloud computing and artificial intelligence in the 2000s and 2010s further pushed IBM to adapt and innovate to stay ahead of the competition. These technological advancements required IBM to adopt a more agile and flexible approach to business, with a greater focus on innovation, speed, and collaboration.

2. Globalization 

As IBM expanded its operations globally, it faced a range of challenges related to cultural and regulatory differences across different countries and regions. In order to effectively navigate these differences, IBM had to develop a more flexible and adaptable approach to business, one that was able to respond to local market conditions and customer needs while also maintaining a consistent global brand and corporate identity. 

This required IBM to invest in building a diverse and multicultural workforce, to establish strong local partnerships and alliances, and to develop a deep understanding of local cultures, languages, and customs. 

Additionally, IBM had to comply with local regulations and laws in each country it operated in, which often required significant resources and expertise to navigate. By embracing globalization and developing a more flexible and adaptable approach to business, IBM was able to successfully expand its operations globally and establish a strong global presence.

3. Market competition 

IBM faced intense competition from emerging tech companies in the 1990s, particularly in the areas of personal computing and software development. 

Companies like Microsoft and Intel were challenging IBM’s dominance in the industry, and IBM had to adapt quickly to remain competitive. 

To address this challenge, IBM shifted its focus to services and software, investing heavily in research and development to create new products and innovations that could compete with emerging technologies. 

IBM also streamlined its operations to improve efficiency and reduce costs, while exploring new markets and opportunities for growth. 

This required IBM to be more agile and responsive to market conditions, and to take calculated risks in pursuing new ventures and partnerships. Ultimately, these efforts enabled IBM to remain a major player in the technology industry and to continue innovating and expanding its offerings.

Change management strategy of IBM 

IBM responded to these three drivers of change in several ways, as explained below:

1. Technological advancements

To adapt to rapid technological advancements, IBM invested heavily in research and development to create new products and innovations. It also embraced emerging technologies such as cloud computing and artificial intelligence and developed new partnerships and alliances to expand its offerings.

IBM also shifted its focus to services and software, which helped it to stay competitive as the market shifted away from mainframe computers. Additionally, IBM adopted a more agile and flexible approach to business to enable it to respond quickly to changing market conditions and customer needs.

2. Globalization

To effectively navigate different cultural and regulatory environments, IBM invested in building a diverse and multicultural workforce, established strong local partnerships and alliances, and developed a deep understanding of local cultures, languages, and customs.

IBM also complied with local regulations and laws in each country it operated in, which required significant resources and expertise to navigate. Additionally, IBM developed a consistent global brand and corporate identity while also maintaining the flexibility to respond to local market conditions and customer needs.

3. Market competition

To remain competitive in the face of intense market competition, IBM explored new markets and product offerings while streamlining its operations to improve efficiency and reduce costs. IBM also invested heavily in research and development to create new products and innovations that could compete with emerging technologies.

IBM adopted a more agile and responsive approach to business, which enabled it to take calculated risks in pursuing new ventures and partnerships. Additionally, IBM developed a culture of innovation and collaboration to foster creativity and agility, which helped it to stay ahead of the competition.

Positive outcomes and results of IBM’s successful change management implementation

IBM’s successful implementation of change management led to several positive outcomes and results, including:

Increased profitability: IBM’s shift to services and software helped to increase its profitability by creating new revenue streams and reducing costs. By focusing on high-margin businesses such as consulting and software development, IBM was able to improve its financial performance and profitability.

Improved competitiveness: IBM’s investments in research and development, partnerships, and new markets helped it to remain competitive in the face of rapid technological advancements and intense market competition. By adopting an agile and responsive approach to business, IBM was able to adapt quickly to changing market conditions and customer needs, which helped it to stay ahead of the competition.

Enhanced customer satisfaction: IBM’s focus on innovation, collaboration, and customer service helped to enhance customer satisfaction and loyalty. By developing new products and services that met customer needs and expectations, and by providing excellent customer service and support, IBM was able to build strong relationships with its customers and earn their trust and loyalty.

Increased employee engagement and retention: IBM’s culture of innovation, collaboration, and diversity helped to increase employee engagement and retention. By fostering a culture of creativity and agility, and by valuing and supporting its employees, IBM was able to attract and retain top talent, which helped it to drive innovation and growth.

Strong brand reputation: IBM’s successful implementation of change management helped to strengthen its brand reputation and identity. By maintaining a consistent global brand while also remaining flexible and responsive to local market conditions and customer needs, IBM was able to build a strong and respected brand reputation that is recognized around the world.

Final Words 

IBM’s successful implementation of change management serves as a powerful case study for businesses facing rapid technological advancements, intense market competition, and globalization. By adopting an agile and responsive approach to business, investing in research and development, exploring new markets and partnerships, and fostering a culture of innovation and collaboration, IBM was able to remain competitive and relevant in the technology industry. 

About the Author

Tahir Abbas

Performance Management Case Study

In collaboration with McKinsey & Company

MIT Sloan Management Review

Rebooting Work for a Digital Era: How IBM Reimagined Talent and Performance Management

February 19, 2019 | By David Kiron and Barbara Spindel

Introduction

In 2015, IBM was in the midst of a tremendous business transformation. Its revenue model had been disrupted by new technology and was shifting toward artificial intelligence and hybrid cloud services. To increase its rate and pace of innovation, the company was rapidly changing its approach to getting work done. New, agile ways of working together with new workforce skills were required to accomplish its portfolio shift. But standing in the way was an outdated performance management (PM) system employees did not trust. Diane Gherson, chief human resources officer and senior vice president of human resources, recognized that IBM’s approach to performance management would need to be entirely reimagined before the organization could fully engage its people in the business transformation.

Gherson says the performance management system then in place followed a traditional approach, one that revolved around a yearlong cycle and relied on ratings and annual reviews. “You’d write in all your goals at the beginning of the year, and at the end of the year, your manager would give you feedback and write a short blurb and then give you your rating,” she says.


That approach was “holding us back,” Gherson says. “The massive transformation meant we were shifting pretty dramatically into new spaces and doing work really differently. Whereas efficiency was very important in the prior business model, innovation and speed had become really important in the new business model. And when you’re trying to make that kind of a fundamental shift, it’s important, obviously, to bring your employees along with you.”

Gherson knew from employee roundtables and surveys that IBMers didn’t have confidence or trust in the existing PM system. This view was at odds with the views of other senior leaders, who felt the system in place was working well from their perspective.

It took Gherson more than a year to convince her peers in senior leadership that IBM’s digital transformation would not succeed without higher levels of employee engagement, and that meant focusing on the existing PM system. Eventually she won them over. As for the traditional PM system that was holding the company back? “We threw all that out,” Gherson says. “We kept our principle of cultivating a high-performance culture, but pretty much everything else changed.”

Company Background

2015 was hardly the first time the company had found itself in the midst of a fundamental shift. IBM has had to reinvent itself time and again to remain relevant. Founded in 1911 as machinery manufacturer Computing-Tabulating-Recording Co., IBM (International Business Machines) over the decades has repeatedly adjusted its business focus — from early data processing to PC hardware to services to software systems — in response to evolving markets and competitive pressures.

Today, IBM, headquartered in Armonk, New York, employs about 360,000 people in 170 countries. After 22 consecutive quarters of declining revenue, the company reversed the trend in the fourth quarter of 2017 and subsequently has shown revenue growth. Growth in its cloud, artificial intelligence, cybersecurity services, and blockchain units has contributed to the turnaround, with about half of its revenues now derived from new business areas. Indeed, these days, IBM is betting big on AI and hybrid cloud, recently announcing plans to acquire open-source software pioneer Red Hat, an innovator of hybrid cloud technology, for $34 billion. With that notable acquisition, the company is making a bold bid to compete against heavyweights like Google, Amazon, and Microsoft in the cloud services market.

The new strategic direction has necessitated a change in how IBM’s talent is managed and how the work of the digital enterprise is done. “In a classic, traditional model, a manager will oversee the work of an employee and, therefore, have firsthand knowledge of how they’re doing,” Gherson observes. “That traditional model is long gone in most companies. Work is more fluid.”

At IBM, work is being done differently in three fundamental ways. One is a stronger emphasis on project work: Individuals move around the organization to work on various projects and initiatives, joining teams for short stints before moving on to new teams to tackle new challenges. Two, the entire concept of performance is shifting from primarily emphasizing performance outcomes to a model that also emphasizes the “how,” including the continuous development and application of new skills to keep up with the exponential rate of change in technology. Finally, with the adoption of agile ways of working, continuous feedback becomes a critical part of workflow. The new PM system needed to abandon the concept of an annual feedback event and find a way to reinforce a culture of feedback ― up, down, and across.

Meanwhile, digital transformation in the economy at large is exerting pressure on IBM as the tech giant strives to maintain an edge over its competitors. As a result of these internal and external changes, the company has seen the need to prioritize not only innovation and agility but also the continual development of employee skills, since what it requires of its talent base has changed as well.

Test-Driving a New System

The company’s key decision was to crowdsource its new performance management system rather than impose something top-down on its workforce, which was not consistent with agile methodologies or design thinking. Gherson says it was “really important to have employees feel like they were stakeholders in the new design, not just bystanders or consumers of it.” To that end, IBM undertook a process for designing the system that was a radical departure from the past. “There were many skeptics initially,” Gherson recalls, highlighting the challenges of the project. IBM relied heavily on enterprise design thinking, creating a minimum viable product (MVP), and invited the workforce to test it and offer feedback. Gherson likens the process to “giving people a concept car that they can drive and kick the tires as opposed to asking them what they would like to have in a car.” The rollout was fast: The September 2015 launch of the MVP happened within a couple of months of the first design-thinking session.

While many employees were thrilled that the traditional approach to performance management was on its way out the door, most were skeptical that the replacement program would be an improvement. As Joanna Daly, IBM’s vice president of global talent, recalls, “Employees actually said to us, ‘We don’t believe that you want our input. We think you already know what you’re going to do, and you’re just sort of pretending to ask for our input.’ We had to figure out how to prove to employees that we were authentic and serious in wanting them to shape this.”

[Exhibit: Changes to IBM’s Performance Management]

HR did so in a simple way: by asking employees what they wanted, giving their responses due consideration, and playing back what it was hearing. “We asked, ‘What do you want to get out of our approach to performance?’” Daly says. “And the answer we got was they wanted richer feedback. And they hated being defined by a single assessment rating.”

When Gherson blogged about the new system on the company’s internal platform, her first entry was viewed by 75,000 IBMers within hours, with 18,000 responding with detailed suggestions. The company used its proprietary Watson text analytics to sort through what employees wrote, enabling Gherson to put out a second blog within 48 hours enumerating which elements employees liked and which they disliked. The company proceeded through numerous iterations and playbacks, with employees continuously participating in the design process. Management even reached out personally to the most vocal critics at every step, directly engaging their input in producing the next prototype. The eventual result ― officially launched in February 2016 and called Checkpoint ― was aligned to the employees’ input, providing a PM system focused more on feedback and less on assessment. (See “Changes to IBM’s Performance Management” for key differences between the old and new system.)
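IBM used its proprietary Watson text analytics to sort those 18,000 responses. As a toy illustration of the general idea, not IBM’s actual pipeline, free-text feedback can be bucketed into recurring themes by keyword matching and counted; the theme names and keywords below are invented for the example.

```python
# Toy illustration of sorting open-ended employee feedback into themes.
# This is NOT IBM's Watson pipeline; just a keyword-counting sketch.
from collections import Counter

# Hypothetical theme keywords an analyst might start from.
THEMES = {
    "ratings": ["rating", "rank", "score"],
    "feedback": ["feedback", "coaching", "check-in"],
    "goals": ["goal", "objective", "target"],
}

def tag_comment(comment: str) -> list[str]:
    """Return every theme whose keywords appear in the comment."""
    text = comment.lower()
    return [theme for theme, words in THEMES.items()
            if any(w in text for w in words)]

comments = [
    "I hate being defined by a single rating",
    "I want richer feedback and more frequent check-ins",
    "Let me revise my goals during the year",
]

counts = Counter(theme for c in comments for theme in tag_comment(c))
print(counts.most_common())  # themes ranked by how often employees raised them
```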


Rather than receiving a single rating at an annual review, employees now have more frequent check-ins with managers. Through the company’s mobile ACE (appreciation, coaching, and evaluation) app, they also can seek feedback from peers, managers, or employees they manage.

The new and more agile system allows IBMers to revise their goals throughout the year. In response to crowdsourced input during the design process, employees are assessed according to their business results, impact on client success, innovation, personal responsibility to others, and skills. Managers are held accountable through pulse and mini-pulse surveys of the people they oversee, with poor results leading to training or, in some cases, removal from management.

Checkpoint is a far cry from the previous stand-alone HR program that rated and ranked employees. It’s aligned to the critical factors for IBM’s success and designed to ensure that the company achieves advantage with its talent in a fast-moving competitive landscape.

Checkpoint has been a major contributor to employee engagement, which has increased by 20% since IBM deployed the revitalized performance management system. In fact, in the company’s annual engagement pulse survey, employees pointed to Checkpoint as the change that made the biggest difference in their experience at IBM.

Focus on Learning and Growing

Technological change ― in the marketplace and in IBM’s business focus ― is driving an unremitting need for new skills, making their development an essential part of IBM’s corporate strategy. “In today’s world, skills are actually more important than jobs,” Gherson declares. “In order to reinvent our company, we need everyone to reinvent their skills on a continuous basis. You can’t hire someone because they have a particular skill. You have to hire someone because they have the capacity to continue to learn.” To that end, in addition to the new approach to performance management, talent management at IBM now includes a personalized learning platform and a personalized digital career adviser.

The platforms use data to infer which skills employees have and connect them with learning to build those skills that are increasingly in demand. The personalized program is “really accessible, very consumer-friendly,” Gherson says. “It has everything: internal and external courses, Harvard Business Review articles, MIT Sloan Management Review articles, YouTube videos ― you name it. And it serves it up for you as an individual, based on your unique role. It will say, ‘Given what you’ve taken so far and your career goals, here are some recommendations and here’s what people like you have taken and how they’ve rated it.’”

To encourage career mobility, IBM launched a digital coach for employees wishing to advance their careers within the company. My Career Advisor (known commercially as Watson Career Coach) was created by employees during a company-wide hackathon. It features a virtual assistant that uses data to provide personalized career counseling, such as average time to promotion from an employee’s current role and career steps taken by others to acquire the job a user might want. Another related platform, Blue Matching, serves IBM employees internal job opportunities tailored to their qualifications and aspirations, inferred from their CVs.

What enables these learning and career programs, says Daly, is “having more data available and having better insights to guide the user. These new digital platforms mean we can get these insights directly into the hands of employees and their managers.” Also essential has been uniting these platforms. “It’s not about having a learning platform and having separately an internal jobs platform,” Daly notes. “It’s how do we integrate these two together with AI-enabled advice for employees to explore? What kind of job should I do next? What are my skills gaps if I want to pursue that job, and then what learning would I take to close that gap?”
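Daly’s description amounts to a skill-gap computation: compare the skills a target job requires with the skills inferred for the employee, then map the gap to learning content. Here is a minimal sketch of that idea; the role names, skills, and course mappings are invented for illustration and this is not IBM’s Blue Matching implementation.

```python
# Minimal skill-gap sketch: target-job requirements minus inferred skills,
# then map each missing skill to learning content. All data is invented.

employee_skills = {"python", "sql", "project management"}

job_requirements = {
    "data scientist": {"python", "sql", "machine learning", "statistics"},
}

course_catalog = {
    "machine learning": "Intro to Machine Learning",
    "statistics": "Applied Statistics for Analysts",
}

def skill_gap(target_job: str) -> set[str]:
    """Skills the target job requires that the employee does not yet have."""
    return job_requirements[target_job] - employee_skills

for skill in sorted(skill_gap("data scientist")):
    print(f"Gap: {skill} -> suggested course: {course_catalog[skill]}")
```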

Real-Time Insights

The new PM system was about agility and prioritizing feedback over assessment. IBM elected to go further and figure out how to use all the insights it was developing from its analytics and AI capabilities to ensure that useful insights could readily emerge and be accessible to both HR and the workforce.


“Thanks to these digital experiences, we’ve modernized how to deliver insights to our workforce and management ― right when they need it,” Daly says. She cites compensation decisions as an example. Using machine learning, “we advise managers about which employees should get the highest salary increase. We arrive at the recommendation using dozens of internal and external data sources. This helps with more transparent conversations between the manager and her employee,” she says. “We give managers talent alerts directly on their personalized dashboard. For example, the system might observe, ‘Hey, your team member has been in her band level for a few years and is a good performer and is building her skills. Have you thought about promoting her?’”
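The talent alert Daly describes is essentially a rule (or model) evaluated over employee records and surfaced on a manager’s dashboard. A minimal rule-based sketch follows; the field names and thresholds are assumptions for illustration, not IBM’s system.

```python
# Rule-based sketch of a "consider promoting" talent alert.
# Field names and thresholds are illustrative assumptions.
from dataclasses import dataclass
from typing import Optional

@dataclass
class Employee:
    name: str
    years_in_band: float
    performance: str        # e.g. "good", "outstanding"
    skills_growth: bool     # is the employee actively building skills?

def promotion_alert(e: Employee) -> Optional[str]:
    """Mirror the dashboard rule quoted above: long tenure in band,
    good performance, and ongoing skills growth -> suggest promotion."""
    if e.years_in_band >= 3 and e.performance in {"good", "outstanding"} and e.skills_growth:
        return (f"Your team member {e.name} has been in her band level for a few "
                f"years, is a {e.performance} performer, and is building her "
                "skills. Have you thought about promoting her?")
    return None

alert = promotion_alert(Employee("A. Example", 3.5, "good", True))
if alert:
    print(alert)
```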

Going forward, Daly anticipates that more predictive and prescriptive insights will be transmitted directly to managers and employees at the moment they’re needed most, embedded in the workflow.

Preventing Attrition

“In our industry, talent is the No. 1 issue,” Gherson contends. “And so, it’s really important that we attract and develop and continue to upgrade our skills and retain talent if we’re going to win in this market.” Even with more than 7,000 job applicants coming into IBM every day, the tech talent shortage and ongoing talent wars in AI and cybersecurity make retention particularly crucial; experts agree that in the coming decades, there won’t be enough qualified people to fill available jobs.

Gherson and her team received a patent for their predictive attrition program, which was developed at IBM using Watson AI algorithms to predict which employees were likely flight risks. Most managers were initially skeptical at the notion that algorithms could have more insight into their employees’ intentions than they did — until the algorithm consistently made correct predictions. Then, Gherson recalls, “We started getting these little notes from managers saying, ‘How did you know?’”

Significantly, the technology is about prescription in addition to prediction. “We reach out to you as a manager,” Gherson explains, “and we tell you that you’ve got someone who is at high risk to leave and here are the actions we recommend you take.” Because the AI is able to infer which skills individual employees possess, it can then recommend actions for managers to implement — often related to furthering skills development — to prevent them from leaving. By helping their employees develop new skills, managers bolster employee engagement and increase job satisfaction, advantages in a talent-scarce market environment. “The attrition rate of the people we touch with this program is minuscule compared to the control group,” Gherson says, noting the improvement in employee retention has already saved IBM nearly $300 million.
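IBM’s patented program is built on Watson AI algorithms; as a generic illustration of the underlying idea, not the patented system, here is a minimal attrition-prediction sketch on synthetic data using scikit-learn. The features, coefficients, and threshold are all invented for the example.

```python
# Generic attrition-prediction sketch with synthetic data.
# This illustrates the idea only; it is not IBM's patented Watson program.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n = 1000

# Synthetic features: months since last promotion, recent training hours,
# and engagement survey score (all invented for illustration).
X = np.column_stack([
    rng.integers(0, 60, n),        # months since last promotion
    rng.integers(0, 80, n),        # training hours in the past year
    rng.uniform(1.0, 5.0, n),      # engagement score
])

# Synthetic label: leaving is more likely with stale promotions,
# little training, and low engagement.
logits = 0.05 * X[:, 0] - 0.03 * X[:, 1] - 0.8 * X[:, 2]
y = (logits + rng.normal(0, 1, n) > 0).astype(int)

model = LogisticRegression().fit(X, y)

# Flag employees above a risk threshold so a manager can act early,
# e.g. by recommending skills-development actions.
risk = model.predict_proba(X)[:, 1]
print(f"{(risk > 0.5).sum()} employees flagged as flight risks")
```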

The Evolving Role of HR

Given the heightened significance of talent, HR, as the function primarily responsible for talent, has a revitalized role to play in executing corporate strategy and driving value at IBM.

To achieve a more central role in value creation, IBM’s HR function had to be freed from the tasks that traditionally consumed so much of its managers’ time. “People have a million questions: ‘When do I have to sign up for my 401(k)?’ ‘What’s the deadline for the health benefits program enrollment?’ These are all findable pieces of data, but actually finding them has always been the hardest part,” Gherson says. “I wouldn’t say that’s the highest value that HR could provide, but it’s a lot of what HR has been doing. Maybe in some companies that’s all HR does. But that’s not the purpose of HR. You don’t need HR to answer those questions. You just need really great bots and virtual assistants.”

Here, the company again exploited its own capabilities in AI and analytics. In HR alone, IBM currently deploys 15 virtual assistants and chatbots, and the company is diligent about measuring both employees’ experience and the effectiveness of the bots in responding to questions. With the bots taking on routine tasks previously performed by people, IBM’s HR function can devote itself to what Gherson sees as its real purpose: “to create competitive advantage with your talent and improve the employee experience.”
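As a rough sketch of the routine question-answering these assistants take on: a real deployment would use a natural-language-understanding service rather than keyword matching, and every question and answer below is invented:

```python
# Toy HR FAQ bot: match a question to canned answers by keyword overlap.
FAQ = {
    ("401(k)", "retirement", "sign up"):
        "401(k) enrollment opens the first week of each quarter.",
    ("health", "benefits", "enrollment", "deadline"):
        "Health benefits enrollment closes November 30.",
}

def answer(question: str) -> str:
    q = question.lower()
    best, best_score = None, 0
    for keywords, reply in FAQ.items():
        score = sum(k.lower() in q for k in keywords)
        if score > best_score:
            best, best_score = reply, score
    return best or "I couldn't find that; routing you to an HR specialist."

print(answer("What's the deadline for health benefits enrollment?"))
```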

Of course, technology and data are vital not just in freeing up the humans on the HR team but also in optimizing their performance. “For too long, HR people have relied on just being highly intuitive: ‘I think this person’s going to be a good fit for the job’ or ‘I think a two-year assignment is the right length,’ or whatever,” Gherson observes.

“And actually, you can employ science-based methods to come up with an estimate ― for example, there’s an 80% chance they’ll fail in this job because they lack these capabilities or there’s a 50% chance that you’ll get no return on your investment in that international assignment because it’s too short,” she says. “So, we should be able to give much better advice to the people that we support.”

Gherson acknowledges that working this way also requires culture change within the HR function, demanding new skills, such as data science, and new job roles to fully realize the disruption. She has invested in a robust re-skilling education program for her team of HR professionals.

Gherson says HR can’t simply stop at using technology to detect patterns. Giving managers data on, say, turnover rate, without also offering guidance on how to use that information, leaves them to rely once again on intuition to solve problems. As with the predictive attrition program, IBM pairs reporting data with recommendations for action.

“Technology enables us to not just report, but to then say, ‘If you keep doing what you’re doing, here’s what the picture will look like a month from now, a year from now. Your cost of labor will be higher than your competitors by 12% if you carry on hiring at the rate you’re hiring. So here’s a prediction that’s going to be a bit of a wake-up call for you. But if you take these actions, here’s the impact,’” Gherson explains.

“We’re going from intuitive to reporting to predicting to prescribing,” she adds. “And if we can take it all the way to that level, then we’re really adding value. We’re very proud of the fact that through these talent programs, HR delivered more than $107 million in benefits in the last year.”
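The wake-up-call prediction Gherson describes is, at bottom, a compound-growth projection. A toy version of that arithmetic, with all figures invented purely to show the shape of the calculation:

```python
# Project indexed labor cost under two hiring scenarios and compare
# against a competitor benchmark, as in the warning quoted above.
def project(cost_index: float, monthly_growth: float, months: int) -> float:
    return cost_index * (1 + monthly_growth) ** months

ours_now, competitor_in_12mo = 100.0, 102.0   # hypothetical cost indices
status_quo = project(ours_now, 0.012, 12)     # keep hiring at today's rate
recommended = project(ours_now, 0.002, 12)    # slow hiring as prescribed

gap = (status_quo / competitor_in_12mo - 1) * 100
print(f"Status quo: {gap:.0f}% above the competitor in 12 months.")
print(f"Recommended action: index {recommended:.1f} vs. {status_quo:.1f}.")
```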

IBM’s efforts to modernize its performance management system are part of an ongoing process. “We will continue to refine the measurement and expectations of skills growth in IBM as it becomes clear that we need to become a fabulous re-skilling-at-scale machine and hold ourselves accountable to that,” Gherson says. Daly echoes that point: “These aren’t programs that HR is developing. This is a new way of working that all IBMers are developing together so that we can keep our skills up to date as things keep changing in the future.”

case study on ibm

Joanna Daly

vice president, global talent

Joanna Daly is IBM’s vice president of talent with global responsibility for talent acquisition, people analytics, AI strategy for HR, employee experience, performance management, and careers and skills. Her previous roles at IBM have included leading compensation and running HR for the company’s global industry platforms, blockchain businesses, and European business services operations, with stints in Singapore and India as well. Joanna is a frequent speaker at conferences and in the media on AI in HR, diversity and inclusion, skills, and the future of work.

case study on ibm

Diane Gherson

chief human resources officer and senior vice president, human resources

As chief human resources officer and senior vice president of human resources, Diane Gherson is responsible for the people strategy, leadership, skills, careers, engagement, employee services, labor cost, and diversity and inclusion of IBM’s 360,000-person global workforce. During her tenure as CHRO, as IBM has dramatically shifted its business portfolio, Gherson has redesigned all aspects of the company’s people agenda and management systems to shape a culture of continuous learning, innovation, and agility. At the same time, she has digitally transformed the HR function, incorporating AI and automation across all offerings, resulting in more than $100 million in net benefits in the past year.

HR Transformation as the Engine for Business Renewal

Commentary by Anna A. Tavis

Industry disruptions have headlined business news since the early 2000s. With the cloud revolution driving change in global markets, traditional built-to-last companies have had to rapidly transform themselves to survive, adapt, and compete. Market-facing customer service, sales, and marketing functions reinvented themselves in the new digital image a decade ago. Although HR is a latecomer to the digital scene, it stands ready to undergo its own reinvention armed with smart technology, data-driven insights, and a renewed sense of purpose.

To paraphrase Diane Gherson, IBM’s chief human resources officer, talent is unquestionably the new economy’s number one competitive asset. HR, as a traditional caretaker of talent, has to leapfrog generations of evolution, moving from intuition to reporting to predicting and ultimately to prescribing — all in a matter of a few years. Some companies, like IBM, are successfully making this leap. Critics, so ready to question HR’s relevance and viability, should take note.

This case study describes HR transformation at IBM. It is particularly instructive for companies embarking on their own HR digital transformation efforts. IBM’s most important lessons are less about the specific solutions they introduced and more about the way they went about finding their new philosophy and their new operating model. The IBM story is as much about what they decided not to do as it is about what they ended up doing.

Diane Gherson’s most consequential first step was to abandon the practice of benchmarking other companies and not to rely on HR experts to renew her strategy. She turned instead to IBM’s own employees for answers. Not surprisingly, the message her team heard from employees was not always in line with the view of senior management, which did not believe that much could change in HR. It became clear that IBM’s transformation was to be anchored in agile ways of working. Employees, however, saw the company’s traditional performance management (PM) as an administrative chain holding back the adoption of fast, agile ways of working. The decision was made to radically redesign PM with employee experience in mind. In the process, all other functional areas in HR were redesigned and realigned to serve HR’s new purpose.

The following 10 decision points are worth considering when reviewing the IBM case in the context of your own organizational transformation:

1. Decide where to start. IBM’s first and highest priority was to redesign its PM system. The team turned to its own employees for redesign ideas, not HR experts or senior leaders.

What you can do: Identify the weakest link in your talent management system. If it’s your PM approach, this is where change should begin.

2. Connect your transformation to an existing element of your strategy. IBM used its adoption of agile practices across the organization as the primary catalyst to overhaul its entire talent management system.

What you can do: Choose the one key performance indicator (KPI) that intersects with talent that will have the most impact on your business.

3. Renew your talent/HR purpose. By committing to employee experience, engagement, and learning, IBM shifted away from an earlier focus on differentiation and high potentials.

What you can do: Decide what type of culture you want to have. Assess how fast you can move from an administrative compliance- and appraisal-based approach to being employee-centric and learning-focused.

4. Decide where to start: Identify the most consequential first step with the broadest possible impact. IBM made a PM redesign the priority in its talent transformation process and had the capacity, capability, and political capital to go global with its minimum viable product (MVP) for the entire organization.

What you can do: Identify whether performance management is the weakest link in your talent management system and where the pain points are for your employees and management.

5. Select the design method consistent with your new purpose. For IBM, agility and design thinking became key methodologies HR successfully applied.

What you can do: Select and agree on design principles and method(s) consistent with your talent philosophy and aligned with your purpose. Teach those skills and test to see if they work for all.

6. Get your organization’s buy-in to support your transformation effort. IBM took a two-tiered approach to secure buy-in: (1) It earned employee trust and engagement by crowdsourcing design ideas from across the company. (2) It won over senior management by running successful experiments proving that attrition could be predicted by data.

What you can do: Learn to listen. Generate insights and communicate decisions supported by the evidence you collect. Engage key stakeholder groups with data relevant to them.

7. Decide how to test and improve the designed product. IBM went for speed, customer feedback, and continuous improvement. Having designed and released its crowdsourced PM process, Checkpoint, in record time, the company “proceeded through numerous iterations and playbacks, with employees continuously participating in the design process.”

What you can do:

Choose one of three approaches:

  • Launch a company-wide MVP: Your priority initiative is based on your company’s KPIs and readiness for the company-wide rollout.
  • Experiment and create a proof of concept. Run a series of experiments starting with the business units most ready to innovate. Show results to others.
  • A combination of the above two approaches.

8. Go beyond performance management: Decide on your next steps. Successful implementation of the redesigned PM process revealed further strategic talent needs for IBM:

  • Accelerate and personalize skills renewal.
  • Customize decision support for managers.
  • Create an internal marketplace for jobs.

What you can do: PM renewal has a domino effect on all HR processes and tools. What comes next on the renewal list will have to be decided by your company depending on its strategic priorities. Meanwhile, HR will have to renew and upskill itself as the transformation process continues.

9. Assess how to turn technology and data into the greatest enablers of transformation. IBM HR fully leverages its tech and AI capabilities, often creating its own tech tools. My Career Advisor, for example, is IBM’s mobile in-house career coach created by employees at a company-wide hackathon. Blue Matching serves IBM employees with notice of new internal job opportunities tailored to their qualifications and aspirations.

What you can do: Technology and automation are central to the transformation of HR. Yet, no two companies’ technological and data capabilities are alike. Choose your tools wisely, develop technical expertise internally, or borrow your experts. Do not overspend on systems unless you understand how they will deliver.

10. Integrate tools, platforms, and processes with employee experience in mind. IBM’s case shows how to bring all processes, tools, and platforms together into one renewed talent ecosystem. “It’s not about having a learning platform and…an internal jobs platform,” noted Joanna Daly, IBM’s vice president of global talent; what matters is how they integrate, with AI-enabled advice helping employees explore what jobs they should do next.

What you can do: No matter where you decide to start, integration should be your final destination.

IBM’s case could be the timely accelerator of your own company’s HR transformation. There is a lot to learn here, but no one’s “best practice” is a replacement for your own discovery. The best lesson to learn from Diane Gherson and her team is their innovative attitude and openness to experimentation in the face of the unknown. IBM’s HR has shown us how to innovate, take on risks, and show courage. Now is the right time to take the right lessons from IBM, apply them, and scale them. Best of luck as you begin.

Anna A. Tavis is a clinical associate professor of human capital management and academic director of the human capital management program at New York University. She tweets @annatavis.

About the Authors

David Kiron is the executive editor of MIT Sloan Management Review’s Big Ideas Initiative, which brings ideas from the world of thinkers to the executives and managers who use them.

Barbara Spindel is a writer and editor specializing in culture, history, and politics. She holds a Ph.D. in American studies.

Contributors

Carrie Altieri, Michael Fitzgerald, Jennifer Martin, Allison Ryder, Karina van Berkum

Acknowledgments

Joanna Daly, vice president, global talent, IBM

Diane Gherson, chief human resources officer and senior vice president, human resources, IBM


How IBM Became A Multinational Giant Through Multiple Business Transformations

Here’s what you’ll learn from IBM’s strategy study:

  • How an accurate diagnosis of your organization’s most pressing challenge can help you form a coherent strategy to overcome it.
  • How developing the strategic instinct to recognize change early, and transforming your business decisively, both rely on unifying your organization behind a single direction.
  • How focusing on short-term financial gains is putting your long-term survival and profitability in jeopardy.

IBM stands for International Business Machines Corporation and is a multinational technology corporation with over 100 years of history and multiple inventions that are prevalent today. Its headquarters are in Armonk, New York, but it operates in over 170 countries.

Institutional investors own over 55% of IBM, while around 30% belongs to mutual funds, and individual investors own less than 1%. Arvind Krishna has been IBM’s Chairman and CEO since 2020.

IBM’s market share and key statistics:

  • Total assets of $28.999B as of September 30, 2022
  • Revenue of $57.35 billion in 2021
  • Total number of employees in 2022: 345,000
  • Brand value of $96.992B in 2021
  • Market Capitalization of over $130B as of December 2022


Humble beginnings: How did IBM start?

IBM was founded in 1911 as the Computing-Tabulating-Recording Company (CTR) in Endicott, New York, United States.

CTR was the product of three and a half amalgamated companies:

  • The Tabulating Machine Company
  • The International Time Recording Company
  • The Computing Scale Company
  • The Bundy Manufacturing Company

The company’s founding was very well-timed. It coincided with the profound shift of the United States’ economy from agricultural to industrial. At that time, inventions and innovations were being introduced at an unprecedented rate, changing people’s way of life and defining our modern lifestyle.


The company’s success in this period was due to the wide range of products it offered that were in high demand in industrializing economies: from time-recording clocks and commercial scales to mechanical data-handling systems such as tabulating machines.

But it was CTR’s corporate culture and managerial practices that enabled it to pioneer and serve that demand.

IBM’s scammy and problematic birth

The birth of IBM was the result of the vision and leadership of CTR's first president, Charles Ranlett Flint.


Flint was notorious for combining companies, creating monopolies, and having other people manage them while he simply owned stocks. That’s what he intended to do with the creation of CTR as well.

Here are the companies that formed what later became IBM:

  • In 1900, Flint bought The Bundy Manufacturing Company, the inventor of the “punch card” time recorder that allowed factories to convert working hours into salaries. The company was quite successful, thanks to increased demand from emerging factories and the founder’s acute business skills. Flint merged it with the International Time Recording Company (ITR).
  • ITR was the core of CTR. By the time Flint created CTR, ITR was already a business group and an established player in selling and maintaining time recorders, with an international presence. Flint had bought out almost all of its competitors, effectively creating a mini-monopoly.
  • The next company that formed CTR was the Computing Scale Company, a marginally profitable firm that had created a commercial scale for small merchants like butchers and cheesemakers. It was part of Flint’s vision of mechanical data handling.
  • The fourth and last company that formed CTR was The Tabulating Machine Company. Its main product was punch card tabulating equipment that automated parts of a manual, very labor-intensive process. It sped up “data entry,” increased accuracy, and reduced costs dramatically. The machine was the invention of Herman Hollerith, who is considered the second founder of CTR.

The merging of these three and a half companies didn’t make “business sense” at the time, nor was it the result of a careful business strategy.

At least not in the way we mean it today. It was a technical scheme that would allow Flint to protect his investment even if one of the companies wasn’t profitable and he had to sell it. Because, as it turns out, ITR was prosperous, and the tabulating business was slowly growing even though it required huge capital reinvestments. But the computing part of CTR was dying.

As a result, the child of this amalgamation was valued at twice its actual worth.

This inflated value was supported by a loose argument of “economies of scale” since all these businesses were “measuring stuff.” From its very first days:

  • The stock was overvalued
  • The company was heavily in debt
  • There were a lot of internal clashes
  • The three businesses had no synergy
  • There was little attention to innovation
  • The board of directors only cared about profits
  • The customer and employee treatment was poor

In short, IBM was born with some of the worst conditions for any company.

IBM’s coherent business strategy that got it out of the pit

Just three years after its creation, in 1914, the company changed its culture, executive team, and product line.

In ten years, it went through an astounding business transformation.

The move that initiated this transformation was the hiring of Thomas J. Watson Sr. as the company’s general manager. The previous leader had been little more than a credibility marker Flint had installed to draw investors. Watson’s influence on the company, however, is so monumental that he is considered the third founder of CTR, the one who shaped it into IBM.

IBM President Thomas J. Watson 1920s

Watson carried out a series of initiatives that laid the foundation for what would later become America’s largest technology company of the previous century:

  • He built a mighty salesforce and a training program called “Sales School” that every salesperson had to graduate from.
  • He brought clarity of purpose with frequent communication of goals and performance measures.
  • He aligned daily actions with measurable targets that were part of the company’s strategy. He effectively created a culture of execution.
  • He created a new line of products in the data-processing industry.
  • He implemented initiatives to bring people from the three different divisions close together.
  • He improved efficiencies by bringing product developers and manufacturing staff into the same building, enabling cross-functional support and information exchange.
  • He cultivated a system of shared beliefs and practices that empowered employees to make decisions that were consistent with the company’s priorities.
  • He trained customers on how to use their products, gaining valuable feedback and ideas.

Watson spent half of his career as a valued employee of the National Cash Register Company (NCR), where he learned everything he knew about running a business. When he came to CTR, he brought all of that knowledge with him: the development of a salesforce, budget and personnel practices, and even some executives who had worked for his previous employer.

Watson was a highly motivated, optimistic, and conservative man of principle. During his tenure, CTR grew and consistently developed its product lines to cover a wide range of business machinery.

He took CTR from a scammy amalgamation to a respectable and healthy organization, giving it a new name: International Business Machines (IBM).

Key Takeaway #1: Diagnose the challenge and tackle it with a coordinated strategy 

When Watson became general manager, the company was rotten. However, he was ambitious and driven. His approach transformed the firm and affected the company’s journey for many generations. It can be summarized into two distinct steps.

When faced with a crumbling organization:

  • Diagnose the most important challenge. Make an honest SWOT analysis. Watson found that CTR had a dying division (weakness), a profitable one (strength), and a promising one (opportunity). He based his approach on these findings.
  • Devise a coherent and executable strategy. Create a strategic plan that addresses this challenge and coordinates resources. Watson applied all his expertise to developing a salesforce to seize the opportunity he found while internally reforming the company.

Watson practically transformed IBM’s culture into an improved copy of NCR’s. His experience fit IBM’s challenges like a glove. Some could argue that he happened to be a hammer who found its nail; others, that he found a nail and shaped himself into a hammer.

Whatever is true, his results were undeniable.

IBM’s Golden Period: the strategy and tactics IBM used to penetrate the computer industry

IBM went through the Great Depression and came out of it stronger, wealthier, and healthier.

It also went through World War II, which gave the company an explosive push that was hard to maintain once the war ended. IBM’s activities during WWII were plentiful and… complicated. One thing is certain: among Watson’s most important initiatives were the financial support of the family of every IBMer who went to join the fight and the promise that, once the war was over, they would get their jobs back.

As a result, once WWII ended, the company had a workforce more than 25% larger at its disposal, while a huge part of its revenue-generating business, the military contracts, vanished nearly overnight.

Here’s how IBM faced these new challenges.

IBM System 360 Model 30 central processor unit (CPU)

IBM’s corporate strategy against an increased workforce and a vanished revenue stream

Watson recognized the problem from the beginning. His strategy may have been simple, but its flawless execution, in the face of real obstacles, made all the difference.

The strategy had two key pillars, both focusing on technological advancement:

  • Improve, marginally, current products whose demand was still high. The strategic objective was to expand sales of those product lines to generate immediate cash and keep the business afloat.
  • Invest in R&D in advanced electronics, a new technology that was neither fully understood nor ready to be commercialized. This was a necessary bet on the future of IBM.

It was obvious to Watson that the company should, one way or another, lead or at least ride a new wave of innovation and technological advancement. And that wasn’t possible with the company’s current product lines, internal structure, and culture.

IBM’s Deep Blue, the first computer to win a match against a world champion.

The technological and business transformation that IBM went through was an undertaking that few high-tech companies have managed to pull off, especially when so many stakeholders’ survival depends on the company’s well-being. Shareholders, banks that had provided loans, and employees were all highly incentivized to keep the status quo and fight the transformation. “Since we’re selling, why change?” they thought.

This kind of resistance is typical when industry-reshaping technology emerges. Kodak went through the same but, unlike IBM, succumbed to stakeholder resistance, retained its status quo, and eventually died.

IBM’s strategic pivot faced a list of major challenges:

  • The best minds in advanced electronics were not working at IBM.
  • Advanced electronics was a relatively new industry that nobody could really understand or predict what problems it would solve and what use businesses would find in it.
  • New and strong players emerged while old rivals were still actively competing. Remington Rand was an old and active foe while researchers were leaving universities to start new companies and develop systems for the U.S. Army like ENIAC. The reason was that the US government was issuing funding programs investing millions in this new technology. Whoever demonstrated enough expertise and promise was winning the funding, conducting research, innovating, and reaping the benefits.
  • Sales resisted the new technology, clinging to its old and tested practices and propositions. In other words, sales and engineering weren't aligned. 

The tactics IBM implemented to overcome these challenges, and not only survive but transform as a business in record time, are numerous. Since we can’t really know every single one of them, we’ll go through some of the most important events and principles that enabled the company to devise the solutions it needed.

How IBM overcame the challenges of its strategic pivot

The event that marked IBM’s transformation and sealed its strategic pivot was Watson’s son, Tom, entering the business.

Tom was a bright and ambitious young man who, with the help of his father’s influence and a series of chance events, became IBM’s Executive Vice President at the age of 33.

Tom understood the emerging technology, and so he led that part of the business. Watson, on the other hand, didn’t understand how it worked, so he focused on the more familiar, traditional, and still revenue-producing product lines. The two clashed regularly and intensely on many issues. But it’s important to mention that their arguments were never about whether IBM needed to transform and adopt advanced electronics. They agreed on that part. They clashed only on the cadence of the transformation and the policies they put in place.

This distinction is crucial because it reveals that the company wasn’t divided at its core; everyone was moving in the same direction. The clash between the old and the new was extremely productive because:

  • The company started building critical mass in electronics by reinvesting earnings and rental cash flow. It didn’t rely on government funding, but rather it developed its capacity slowly and safely.
  • In order to catch up with the industry’s velocity with its bootstrapped approach, IBM’s advanced electronics department had to do things differently. So it cultivated a culture of transparency and accountability where information flowed freely.
  • The 604 Electronic Calculating Punch, the world’s first mass-produced electronic calculator, was IBM’s first highly profitable product to come out of this approach.
  • The firm used its active customer network to understand customer needs and prioritize improvements on the data processing machines. Thus it created machines with validated demand.
  • The whole process enabled IBM to create “an infrastructure of knowledgeable customers, salesmen, and servicemen for electronic computers.”

When the 1950s arrived, IBM entered the electronic computing market and became a highly competitive player. After that, it changed its strategy, took on larger computer projects, and became more dependent on federal funding to offset the associated risk.

It continued to accumulate knowledge and expertise, improving its processes and products.

Key Takeaway #2: Develop your strategic instinct and adapt fast

Develop your ability to recognize change and quickly transform your business to respond to it. To perform a successful business transformation, unite the organization towards a single direction.

If the need for change is clear at the top, it’s a matter of implementation and policies. It’s not easy, but it’s far more successful to lead a united organization than a two-headed one. So when you perform a business transformation:

  • Define the direction or destination as clearly as possible.
  • Align senior leadership with the desired direction.
  • Build guiding policies that take you from the old to the new. Don’t simply kill the old, transform it.
  • Treat the transformation as an idea worth spreading. Take advantage of your strengths and apply the Law of Diffusion of Innovations.

The decline during the last decade and IBM’s enterprise strategy to return to the top

IBM slowly but surely started shifting its business model again in the latter half of the past century and the following decades.

This time, the strategic pivot was more fundamental. The company shifted from “components to infrastructure to business value.” In other words, it shifted from manufacturing computers and new technologies to offering IT consulting and integration services.

This is reflected in its revenue percentages by segment. In 1980, 90% of IBM’s revenue was generated from hardware sales. By 2015, the company was generating over 60% of its revenue from services and less than 10% from hardware sales.

However, the journey wasn’t as smooth as in past transformations.

The challenges of the consulting industry that left IBM behind in the last decade

The shift, this time, was taking place less effectively.

The company was selling fewer and fewer pieces of hardware each year while its revenues from consulting services weren’t increasing as fast. The company wasn’t investing as much in R&D, and it entered the new era of computing with an extreme focus on financials.

In 2006 and in 2010, the company's leadership announced “Roadmap 2010” and “Roadmap 2015,” respectively. These were financial goals that were mistakenly used as strategic guiding policies. And to the company’s detriment, they dictated decision-making on every level.

Here are key facts indicating that this extreme financial focus was a terrible strategy at the worst possible time:

  • IBM’s new CEO, Virginia Marie “Ginni” Rometty, didn’t enjoy employee support. Due to her merciless tactics and relentless focus on pleasing shareholders, employee morale, and with it productivity, sank to an all-time low.
  • Revenue was decreasing year over year.
  • “Rebalancing the workforce,” AKA layoffs, became a regular quarterly tactic to make the numbers.
  • Current and ex-IBMers were losing faith and becoming less and less content with leadership.
  • Despite the lack of growth, the stock continued rising, paying dividends and delivering high Earnings Per Share (EPS). That was the result of “financial gimmicks” like massive stock buybacks and stashing assets and profits outside of the US to avoid taxes.
  • An extreme focus on cutting costs: using overseas “global delivery skills,” AKA cheaper workers, and even docking 10% of some employees’ salaries to fund their retraining, all while charging high-end prices for IBM’s services.

But you can only cut costs so far. There is a limit to how much cost-cutting you can do before you hurt operations and production. And IBM reached that limit well before 2015, the year “Roadmap 2015” had promised $20 in EPS.

By the end of 2014, the company had amassed huge debt, its hardware profitability had taken a nosedive, its margins had declined, the executive leadership waived their personal annual incentive payments for 2013, and the Roadmap was abandoned.

To save the company, leadership had to come up with a radically different strategy. And it did. The 5 “imperatives” strategy was much more attractive to all the stakeholders and would prove to be much more effective.

IBM’s 5 imperatives and its strategy for recovering its past glory

IBM’s biggest weaknesses in the first decade and a half of the current century were financial performance and strategic blunders.

But if it had no strengths to leverage, it wouldn’t be alive today. Its size is one of them: IBM is huge. As of 2018, the company employed around 378,000 people and commanded one of the largest collections of PhDs in computer science and technology. IBM generates over $50 billion in revenue annually with consistently large profits. In 2017, the company had over $8 billion in cash.

The “five imperatives” were a strategy that focused on actual performance, not financial engineering, to succeed.

The five imperatives were:

  • Data analytics
  • Cybersecurity
  • Cloud computing
  • Social networking
  • Mobile technologies

The company made several acquisitions to close the competitive gap in all of those focus areas while shifting resources to support those initiatives. As a result, it surpassed its competition with analytics software and its capabilities to manage and analyze massive bodies of data. With the $2 billion acquisition of SoftLayer, it caught up in cloud computing and extended its services to include cybersecurity. A partnership with Apple offered the promise of portable computing and app development platforms lodged in cloud servers. Finally, IBM offered management consulting as much as software services through its social networking focus.


IBM’s strategy has focused even more in recent years, integrating the imperatives into two major pillars: hybrid cloud and Artificial Intelligence (AI).

It puts everything under the umbrella term: Digital Transformation.

IBM’s focus on digital transformation propels it into the future

IBM’s future looks promising, and its strategy is putting it back at the center of computers and technology. In January 2018, the company announced its first quarter of YoY revenue increase since 2012.


It focuses once again on delivering value to its customers by addressing the crucial challenges that accompany every digital transformation:

  • Managing the increased complexity of heterogeneous enterprise IT environments.
  • Extracting valuable insights from available data.
  • Sustaining operational competitiveness against disruptive market changes.
  • Countering increased cyber threats and the rising cost of cybersecurity.
  • A cohesive end-to-end execution of solutions that address all of these matters.

The way IBM addresses these challenges and chooses to differentiate itself is by adopting a platform-centric hybrid cloud approach paired with advanced AI capabilities. The infrastructure relies on Linux, containers, and Kubernetes as the architectural foundation.

In layman’s terms, the company’s value proposition is the sustainable and accelerated transformation of its clients’ businesses and processes through:

  • Hybrid cloud that delivers agility and speed.
  • Tailored and trustworthy data governance respecting privacy and generating data-driven business insights.
  • AI-driven decision-making that automates enterprise processes.
  • Consistency, security, and compliance.

The company is rapidly growing its ecosystem, enhancing client experience while driving value and innovation with its open-source technologies.

Key Takeaway #3: To succeed long term, focus on developing business capabilities instead of financial returns

Ambitious goals and financial promises are not strategies. They might provide some returns in the short term but ultimately set the company up for future failure. Cost-cutting is not a tactic that yields infinite returns.

When the industry changes and new trends and technologies emerge, your competitive advantage won’t serve you much longer:

  • Make a thorough analysis of the environment. Spot the most promising emerging trends in technology, customer expectations, and markets.
  • Perform an internal analysis to define your strengths, weaknesses, and current capabilities that power your competitive advantage.
  • Develop a strategy that takes advantage of your current capabilities, develops adjacent ones, and mitigates weaknesses to seize the opportunities you spot.
  • Until your new strategy is performing and your competitiveness relies on it, ensure cash flow and sustainability through your current healthy lines of products.

Why is IBM so successful?

IBM’s success over its long history can’t be attributed to a single cause.

In each distinctive phase, IBM demonstrated the qualities that enabled it to thrive and pioneer in technological advancements. One consistent quality that allowed IBM to stand the test of time has been its decisive adaptability, the ability to spot new trends and transform its business in time to lead change.

Its corporate culture of respect and hard work has been the cornerstone of every single one of its achievements.

Growth by numbers (2012 → 2017 → 2021):

  • Total consolidated revenue: $104.5B → $79.1B → $57.3B
  • Total consolidated gross profit: $50.3B → $36.2B → $31.5B
  • Number of employees: 434.2k → 366.6k → 282.1k


Waking Up IBM: How a Gang of Unlikely Rebels Transformed Big Blue

Six years ago, IBM was a has-been. Today, it’s an e-business powerhouse. It didn’t turn around by imposing change from the top. It let ideas, initiatives, and enthusiasm bubble up from below. Maybe your company should do the same.

Do you remember when IBM was a case study in complacency? Insulated from the real world by layer upon layer of dutiful managers and obsequious staff, IBM’s executives were too busy fighting their endless turf battles to notice that the company’s once unassailable leadership position was crumbling around them. The company that held the top spot on Fortune’s list of most admired corporations for four years running in the mid-1980s was in dire need of saving by the early 1990s. Fujitsu, Digital Equipment, and Compaq were hammering down hardware margins. EDS and Andersen Consulting were stealing the hearts of CIOs. Intel and Microsoft were running away with PC profits. Customers were bemoaning the company’s arrogance. By the end of 1994, Lou Gerstner’s first full year as CEO, the company had racked up $15 billion in cumulative losses over the previous three years, and its market cap had plummeted from a high of $105 billion to $32 billion. Armchair consultants were nearly unanimous in their view: Big Blue should be broken up.


  • Gary Hamel is a visiting professor at London Business School and the founder of the Management Lab. He is a coauthor of Humanocracy: Creating Organizations as Amazing as the People Inside Them (Harvard Business Review Press, 2020).


HBS Case Collection (April 2021, revised June 2021): “IBM: Design Thinking” by Srikant M. Datar, Amram Migdal, and Paul Hamilton. Print, English, 21 pages.


IBM | Case Study

An intelligent ABM targeting option for IBM

Watson Analytics, a tool from IBM, visualizes data for its users, automatically creating charts and tables and facilitating quick analysis of the findings it identifies as strongly supported by the data.

IBM’s former Director of Performance Media was looking to generate more leads at lower cost than business-as-usual tactics allowed. Bombora and LinkedIn teamed up to help IBM’s Watson Analytics team more intelligently target businesses interested in its product. The result: a 41% lower cost-per-registration.


IBM had relied on fairly sophisticated data-driven targeting to drive registrations for a free trial download of Watson Analytics, using LinkedIn’s native targeting capabilities to focus on marketing, information technology, and analytics professionals. The director further honed this targeting using job title, seniority, and professional group.

While this approach performed well, IBM and its agency, Neo@Ogilvy, saw an opportunity to use an Account Based Marketing (ABM) approach to boost registrations and, at the same time, reduce cost-per-registration by combining LinkedIn with Bombora’s Company Surge® Intent data.


Company Surge® Intent data gave IBM visibility into which of its target businesses were actively researching topics related to Watson Analytics.

By combining Company Surge® Intent data with LinkedIn’s Sponsored Content and Account Targeting tools, Neo@Ogilvy served in-feed advertising to the right people at organizations interested in tools like Watson Analytics.

IBM layered on functional targeting to reach marketing and technology professionals most likely to engage around Watson messaging.

Bombora identified 12,000 businesses interested in four or more topics related to Watson Analytics, including big data analytics, data visualization, and social analytics. It identified another 40,000 businesses interested in at least one of the selected topics.
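The segmentation logic described here can be pictured as tiering accounts by how many relevant topics they are surging on. A minimal sketch; the data structure, account names, and thresholds are assumptions for illustration, not Bombora’s actual API or data:

```python
# Map each account to the Watson-related topics it is actively researching.
surge_data: dict[str, set[str]] = {
    "acme-corp": {"big data analytics", "data visualization",
                  "social analytics", "predictive analytics"},
    "globex":    {"data visualization"},
    "initech":   {"big data analytics", "social analytics"},
}

def tier(accounts: dict[str, set[str]], min_topics: int) -> list[str]:
    """Return accounts surging on at least `min_topics` relevant topics."""
    return [name for name, topics in accounts.items()
            if len(topics) >= min_topics]

high_intent = tier(surge_data, 4)    # analogous to the 12,000-account segment
broad_intent = tier(surge_data, 1)   # analogous to the 40,000-account segment
print(high_intent, broad_intent)
```

The high-intent tier would then be prioritized for LinkedIn account targeting, with the broader tier used for wider-reach campaigns.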

Armed with this information, Neo@Ogilvy drove a 41% lower cost-per-registration on LinkedIn compared to the business-as-usual targeting tactics used in the previous quarter. Additionally, the Bombora-LinkedIn Account Based Marketing approach delivered a 19% lower cost-per-registration than another Q4 tactic that targeted social media professionals.


“Bombora and LinkedIn helped drive a 41% lower cost-per-registration than the business as usual tactics used in the previous quarter.”


IBM fuels collaboration and innovation with flexible workspace

With WeWork, the technology giant created a ‘neutral space’ where it can partner with clients on great ideas.


Around the world, companies of all sizes find space to succeed at WeWork. Our case studies share their unique stories.

The challenge: creating the ideal space for teams and clients to work together

In Chicago, IBM was looking for a new way to work with one of its biggest clients.

Rather than always meeting at the IBM office, Romas Pencyla, vice president and partner at IBM, envisioned an inspiring offsite environment. Pencyla sought a space that would allow his team to engage with that client in design thinking sessions. There, they’d dig into their most pressing challenges and come up with novel solutions.

The solution: a configurable space to support design thinking

IBM sourced a space on one of the top floors of the WeWork National Building, a landmark built in 1907.

Dubbed the Cognitive and Advanced Analytics Garage, the space doesn’t resemble a traditional office. It’s large enough to hold nearly 40 desks, but the room is divided into various workspaces, from couches and other types of soft seating to high-topped tables surrounded by stools.

Most of the furnishings are movable; teams can configure the space to suit their needs. It can look totally different from one day to the next, depending on whether teams are using it for meetings, brainstorms, strategizing, or planning.

The result: a ‘neutral ground’ that inspires team and client collaboration

Pencyla calls the space a “neutral ground.” 

“It’s a dedicated space just for IBM and our client,” he says. “Being at a WeWork building lets us both get out of our own office environment where there’s always a meeting, always a call to take, always someone tapping you on the shoulder with a question.”

Pencyla says the work—rapidly putting together proofs of concept on various projects—is intense. But the environment—in the dedicated office and the rest of the WeWork space—fosters creativity.

“Getting people out of a traditional office culture and into a collaborative area accelerates the process for us,” he says. “Both our folks and the client love coming here.”

Pencyla has deemed the space a success.

“This is exactly what we’ve wanted to do with a lot of our clients,” says Pencyla. “We already have a space in New York so we can do the exact same thing. Our vision is to continually use WeWork in this fashion to help us think, collaborate, and innovate with our clients.”

Key highlights

  • Dedicated space for IBM and their clients
  • Fully configurable for activity-based working, with movable furnishings
  • A “neutral ground” where IBM employees and clients can engage in design thinking sessions without everyday workplace distractions
  • Prime location near the top of an iconic Chicago building

WeWork offers companies of all sizes space solutions that help solve their biggest business challenges.



Research leading blockchain use cases

Be inspired by how innovators are transforming their businesses through use cases built on the IBM Blockchain Platform. You can join an existing blockchain network or work with us to create your own.


Driving auto supply chains forward with blockchain


Until recently, the only way automakers could keep track of the supply chain was through databases and paper trails. That all changed when Renault moved its documentation to blockchain, and invited the rest of the auto industry to join in the digital transformation.
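Why does a shared ledger suit this problem better than siloed databases and paper trails? Because each record carries a hash of the one before it, any later edit breaks the chain for every participant. A toy hash chain makes the idea concrete; this is only a sketch of the principle, not the IBM Blockchain Platform itself (which is built on Hyperledger Fabric and adds consensus, permissions, and smart contracts):

```python
import hashlib, json

def add_record(chain: list, payload: dict) -> None:
    """Append a record whose hash covers its payload and its predecessor."""
    prev_hash = chain[-1]["hash"] if chain else "0" * 64
    body = {"payload": payload, "prev": prev_hash}
    body["hash"] = hashlib.sha256(
        json.dumps(body, sort_keys=True).encode()).hexdigest()
    chain.append(body)

def verify(chain: list) -> bool:
    """Recompute every hash; tampering anywhere breaks verification."""
    prev = "0" * 64
    for block in chain:
        body = {"payload": block["payload"], "prev": block["prev"]}
        expected = hashlib.sha256(
            json.dumps(body, sort_keys=True).encode()).hexdigest()
        if block["prev"] != prev or block["hash"] != expected:
            return False
        prev = block["hash"]
    return True

ledger: list = []
add_record(ledger, {"part": "gearbox-123", "supplier": "Tier1 Co", "doc": "quality-cert"})
add_record(ledger, {"part": "gearbox-123", "event": "shipped"})
print(verify(ledger))                   # True
ledger[0]["payload"]["doc"] = "forged"  # rewrite history...
print(verify(ledger))                   # ...and verification fails: False
```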


AI and Blockchain help discover and transact IP


IPwe helps companies make better use of their intellectual property. Yet the IP transaction platform saw inefficiencies and a lack of transparency in the ecosystem. With IBM Blockchain and AI, it created a suite of products to increase visibility and flexibility within the patent marketplace.

Sonoco and IBM: Safeguarding the efficacy of lifesaving medications with blockchain


Transporting temperature-sensitive pharmaceuticals is no easy task, especially with the global distribution of COVID-19 vaccines. With IBM Blockchain Transparent Supply, the many supply chain partners can communicate and gather reliable data.


IBM and eProvenance: Preserving the quality and integrity of wine with blockchain


Ensuring the wine you bought in Europe is exactly what you’re getting in the US: VinAssure is a blockchain-powered platform that supports wine quality and safety, unlocks the supply chain, and enhances trust in the cold chain.


Blockchain use cases

Other use cases from IBM’s blockchain catalog span banking and financial markets, supply chain, insurance, healthcare, government, media, and consumer goods:

  • Blockchain for invoice reconciliation and dispute resolution
  • Reopening venues with contactless blockchain digital ticketing (True Tickets)
  • How IBM Blockchain technology powers IBM Digital Health Pass
  • Improving the letter of guarantee banking process with blockchain
  • AAIS: Enabling regulatory compliance and increased data access using blockchain
  • ANZ Bank partners with a consortium to transform financial guarantees using IBM Blockchain
  • Farmer Connect and IBM: Connecting coffee growers and consumers with blockchain
  • Coordinating disaster recovery efforts with blockchain
  • UBS: we.trade offers fast, simple, and secure trade transactions based on IBM Blockchain
  • Protecting pharmaceutical product integrity with the Pharmaceutical Utility Network (Walmart, Merck, KPMG)
  • The Vertrax Blockchain: reshaping the oil and gas supply chain with the first multi-cloud deployment of the IBM Blockchain Platform
  • Food Trust and Raw Seafoods: Connecting seafood suppliers and distributors with blockchain
  • Blockchain helps trace responsibly produced raw materials (RCS Global)
  • TradeLens and blockchain technology: a supply chain demo
  • Diamonds are forever with a secure blockchain: the Everledger story
  • Plastic Bank: Enabling plastic recycling and financial inclusion with IBM Blockchain
  • Nuarca transforms proxy voting using blockchain
  • INBLOCK: Improving cryptocurrency security with blockchain and LinuxONE
  • Marsh: Transforming proof of insurance with blockchain
  • Helping companies trade seamlessly with IBM Blockchain (we.trade)
  • Carrefour sales boosted by blockchain tracking
  • Blockchain for advertising: the new black for media buying (Mediaocean)
  • Albertsons joins the IBM Food Trust blockchain network to track romaine lettuce from farm to store
  • IBM and Twiga Foods introduce blockchain-based microfinancing for food kiosk owners in Kenya
  • Transform healthcare outcomes with the simplicity of IBM Blockchain
  • Blockchain in insurance: Five reasons why openIDL will succeed (AAIS, AIG, Standard Chartered, Marsh)
  • Building trust and transparency in insurance policies with blockchain (AIG)
  • CDC leverages IBM Blockchain technology for Electronic Health Records
  • Banks team with IBM for blockchain-powered trade finance (we.trade)


IBM Study: C-Suite Confidence in Delivering Basic IT Services Wanes, While Tech CxOs Focus on Gen AI Demands


ARMONK, N.Y., Aug. 21, 2024 /PRNewswire/ -- A new IBM (NYSE: IBM) Institute for Business Value study found that while IT leaders are preparing organizations for accelerated generative AI adoption, C-suite executives’ confidence in their IT team’s ability to deliver basic services is declining.


The global study* of 2,500 C-level technology executives (tech CxOs) from 34 countries revealed that less than half (47%) of those surveyed think their IT organization is effective at basic services, compared with 69% surveyed in 2013. Today, only 36% of surveyed CEOs and 50% of surveyed CFOs believe IT is effective at basic services, down from 64% and 60%, respectively, since 2013.

At the same time, 43% of surveyed tech CxOs say their concerns about their technology infrastructure have increased over the past six months because of generative AI, and they are now focused on optimizing their infrastructure for scaling generative AI. Respondents report they are currently spending 29% more on hybrid cloud than AI, and, over the next two years, they expect to spend half (50%) their budget on hybrid cloud and AI combined.

As tech CxOs prioritize generative AI-ready infrastructure investments, two-thirds of surveyed CEOs cite a strong tech CxO and CFO collaboration as critical to their organization's success. However, a disconnect exists: only 39% of surveyed tech CxOs say they collaborate with finance to embed tech metrics into business cases, and just 35% of surveyed CFOs report being engaged early in IT planning to set strategic expectations. Among the high-performing tech CxO respondents, the study found that organizations that connect technology investments to measurable business outcomes report 12% higher revenue growth.

"Tech leaders today are grappling with multiple business demands, made even more complicated by the rise of generative AI. They must navigate the challenges of modernizing their IT infrastructure and scaling generative AI to support the business' core competitive advantage, " said Mohamad Ali , Senior Vice President, IBM Consulting. "In this evolving AI landscape, the relationship between tech CxOs and their finance counterparts has never been more important, aligning technology spend with business outcomes to drive real value from AI investments."

Responsible AI is top of mind for tech CxOs, but there is a gap between intention and actions

  • For the majority (80%) of CEOs surveyed, transparency in their organization's use of next-generation technologies, such as generative AI, is critical for fostering trust.
  • Only half (50%) of respondents say they are delivering on key responsible AI capabilities for explainability, and even fewer say they are delivering capabilities for privacy (46%), transparency (45%) and fairness (37%).
  • 41% of tech CxOs surveyed reported an increase in their concerns about regulation and compliance as a barrier to generative AI over the last six months.
  • However, most (70%) tech CxO respondents see regulatory change as an opportunity, versus only 50% of CEOs.

Tech CxOs are driving their organizations to rethink their talent strategy to meet the needs of the generative AI era

  • 63% of tech CxOs surveyed agree that their competitiveness will hinge on their ability to attract, develop and retain top talent.
  • Over the next 3 years, tech executives anticipate a surge in skill scarcities across key areas, including cloud (+36%), AI (+29%), security (+25%) and privacy (+39%).
  • 40% of respondents report an increase in their concern over the past six months.
  • More than half (54%) of tech CxOs surveyed blame financial pressures for hindering their ability to invest in technology talent.
  • Many tech CxOs surveyed (69%) say they are turning to business partners as a source for specialized skills.

To view the full study, including recommendations for technology leaders, visit: https://www.ibm.com/thought-leadership/institute-business-value/en-us/report/cxo

*Study Methodology: The IBM Institute for Business Value (IBV), in cooperation with Oxford Economics, surveyed 2,500 C-suite technology leaders, including Chief Technology Officers (CTOs), Chief Information Officers (CIOs), and Chief Data Officers (CDOs), from 34 countries and 26 industries during Q1 2024. The IBM IBV data analytics team performed a series of in-depth analyses and data transformations to identify a group of high-performing technology organizations corresponding to clear outperformance on a variety of financial and operational measures. The study also includes data from the 2024 CEO Study and the upcoming 2024 CFO Study.

The IBM Institute for Business Value, IBM's thought leadership think tank, combines global research and performance data with expertise from industry thinkers and leading academics to deliver insights that make business leaders smarter. For more world-class thought leadership, visit: www.ibm.com/ibv .

About IBM: IBM is a leading provider of global hybrid cloud and AI, and consulting expertise. We help clients in more than 175 countries capitalize on insights from their data, streamline business processes, reduce costs and gain the competitive edge in their industries. Thousands of government and corporate entities in critical infrastructure areas such as financial services, telecommunications and healthcare rely on IBM's hybrid cloud platform and Red Hat OpenShift to effect their digital transformations quickly, efficiently and securely. IBM's breakthrough innovations in AI, quantum computing, industry-specific cloud solutions and consulting deliver open and flexible options to our clients. All of this is backed by IBM's long-standing commitment to trust, transparency, responsibility, inclusivity and service. Visit www.ibm.com for more information.

Media Contact: Marisa Conway, IBM Communications, [email protected]

Release Categories

  • Artificial intelligence
  • Generative AI
  • Hybrid cloud
  • Research and innovation
  • Social impact


Understanding Data Movement in Tightly Coupled Heterogeneous Systems: A Case Study with the Grace Hopper Superchip

Heterogeneous supercomputers have become the standard in HPC. GPUs in particular have dominated the accelerator landscape, offering unprecedented performance in parallel workloads and unlocking new possibilities in fields like AI and climate modeling. With many workloads becoming memory-bound, improving the communication latency and bandwidth within the system has become a main driver in the development of new architectures. The Grace Hopper Superchip (GH200) is a significant step in the direction of tightly coupled heterogeneous systems, in which all CPUs and GPUs share a unified address space and support transparent fine-grained access to all main memory on the system. We characterize both intra- and inter-node memory operations on the Quad GH200 nodes of the new Swiss National Supercomputing Centre Alps supercomputer, and show the importance of careful memory placement on example workloads, highlighting tradeoffs and opportunities.


I Introduction

Heterogeneous platforms are dominant in modern-day large-scale computing. GPUs are ubiquitous in the fields of HPC and AI and have permitted the growth of workloads to unprecedented scales [1, 2, 3]. Recent breakthroughs in generative AI are made possible by the availability of computational resources, with the need for memory and computation growing steadily. With Large Language Models (LLMs) surpassing the trillion-parameter mark, memory is a critical resource that enables large-scale training and inference [4].

GPUs were born as standalone accelerators. An application runs as a program on the CPU, which is responsible for orchestrating the execution of computational kernels on the GPU and for managing memory allocations and data transfers. Data transfers are critical for the performance of applications regardless of the access pattern, which can range from frequent low-latency communications to bulk copies [5, 6, 7]. The importance of optimizing data movement is reflected in the development of data allocation and management APIs, unified memory systems, more advanced interconnects, and new programming models [8, 9]. With NVLink-C2C (C2C), an interconnect allowing fast, low-latency, cache-coherent interaction between different classes of chiplets, NVIDIA has marked the beginning of a new class of high-end tightly coupled systems in which every Processing Unit (PU) has complete access to all main memory on the system through a unified memory space.

NVIDIA’s GH200 Grace Hopper Superchip (GH200) connects an ARM CPU and a Hopper GPU through the C2C interconnect. Multiple GH200s can be connected to create a large-scale tightly coupled heterogeneous system. We explore the performance characteristics of the Quad GH200 system, the building block of the new Swiss National Supercomputing Centre Alps supercomputer, through a series of microbenchmarks. Many works already look at benchmarking heterogeneous systems and how to program them [10], studying the performance characteristics of heterogeneous workloads [11, 12, 13, 14, 15], or executing simple parallel programs on different architectures [16, 17].

Some works have already looked at the capabilities of the GH200 shared memory system, automatically offloading BLAS kernels to the GPU in CPU-only scientific codes [18] and studying the effect of different memory allocation policies on different applications [19].

The objective of our microbenchmarks is to analyze the interaction between all PUs, physical memories, and memory allocation APIs of the system, highlighting tradeoffs and opportunities. Growth in computing power has not been matched by improvements in memory access latency and bandwidth, resulting in many applications becoming memory-bound. Data movement is now the dominant factor for HPC and ML workloads, and its optimization is crucial to application performance [20, 21, 22, 23].

Tightly coupled systems, like the Quad GH200, greatly expand the design space for these optimizations, allowing for a much larger choice when deciding where to place data and compute. Having access to a larger pool of memory opens up new possibilities for scaling applications with large memory footprints that go beyond what is directly available to a single GPU or CPU. As this pool is heterogeneous, informed data placement is of crucial importance.

We make the following contributions:

  • We design a comprehensive set of microbenchmarks to analyze memory operations in complex tightly coupled heterogeneous systems (code at https://github.com/luigifusco/gh_benchmark).
  • We validate our datapath-oriented approach by highlighting the effect of different data placement policies on the performance of sample workloads.
  • We present a comprehensive analysis of the Quad GH200 node of the Alps supercomputer.

II Background

We describe the Grace Hopper Superchip and the architecture of the tested system (Figure 1 ) with a bottom-up approach, starting from the description of its fundamental hardware components up to the software level, focusing on memory subsystems and management.

II-A The Architecture

The NVIDIA GH200 Grace Hopper Superchip (GH200) is a heterogeneous coherent system that combines a Grace CPU and a Hopper GPU. For the rest of the paper, we refer to either of these two types of chips as a Processing Unit (PU).

II-A 1 NVLink-C2C

NVLink is an interconnect technology originally designed by NVIDIA as an alternative to PCIe, providing additional features and targeting multi-GPU systems. It has evolved through generations of improving link speeds and has reached its fourth iteration. NVLink-C2C (C2C) extends the NVLink family with a high-speed interconnect for engineering integrated devices built by combining multiple chiplets. It allows fast and cache-coherent communication between different classes of PUs. Its architecture allows for a bandwidth of 40 Gbps per data signal, with every link supporting 9 data signals, i.e., 360 Gbps (45 GB/s) per link per direction. In a GH200, Grace and Hopper incorporate ten links each, for a total bandwidth of 450 GB/s per direction [24].

C2C supports the Arm AMBA Coherent Hub Interface (AMBA CHI) architecture, which defines a scalable and coherent hub interface and on-chip interconnect [ 25 ] . AMBA CHI allows for modular designs. It supports cache coherency at the 64-byte granularity, snoop filtering, and different cache models with data forwarding, atomic operations and synchronization, and virtual memory management.

II-A 2 Grace

Grace is an HPC-oriented CPU incorporating 72 Arm Neoverse V2 CPU cores, a 64-bit data center-oriented architecture. It supports up to 480GB of LPDDR5 ECC memory with a bandwidth of up to 500GB/s. The CPU cores are distributed throughout the Scalable Coherency Fabric, a mesh fabric providing up to 3.2TB/s of total bisection bandwidth, integrating CPU cores, memory, system IOs, and C2C connections. Each core has 64KB of L1 instruction cache, 64KB of L1 data cache, and 1MB of L2 data cache. The size of the shared L3 data cache is 114MB. The cache coherency protocol is MESI with inclusive L2 cache [ 26 ] .

II-A 3 Hopper

Table I: Main hardware characteristics of Hopper-based GPUs.

    Feature        GH200 (HBM3)   GH200 (HBM3e)   H100 SXM    H100 PCIe
    SMs            132            132             132         114
    Memory Type    HBM3           HBM3e           HBM3        HBM2e
    Memory Size    96 GB          144 GB          80 GB       80 GB
    Bandwidth      4 TB/s         4.9 TB/s        3.35 TB/s   2 TB/s

The Hopper architecture, the successor of the Ampere architecture, was launched by NVIDIA in 2022. It is designed for data center use and is parallel to the consumer-oriented Ada Lovelace architecture. It is currently employed in the variants of the H100 GPU and in the GH200. Table I lists the main hardware characteristics of these GPUs. Two variants of Hopper exist in the GH200; we have access to the one with 132 Streaming Multiprocessors (SMs), connected to 96 GB of HBM memory with a bandwidth of over 4 TB/s. Each SM has 256 KB of private L1 cache and can run up to 2048 concurrent threads, for a maximum total of 270,336 concurrent threads. 52 MB of L2 cache are shared between all SMs. It supports up to 18 NVLink 4 connections at 25 GB/s per direction each, for a maximum bandwidth of 450 GB/s per direction.

II-A 4 Grace Hopper

The C2C interconnect provides cache-coherent memory access between the Grace CPU and the Hopper GPU on a GH200. The same technology is used to connect the two Grace CPUs in the Grace Superchip, and will be used in the recently announced Grace Blackwell Superchip, combining a Grace CPU and two Blackwell GPUs. The interconnect provides a bidirectional bandwidth of 900 GB/s, 7x higher than what is achievable by the H100, which uses PCIe 5 with a bidirectional bandwidth of 128 GB/s. Up to 32 GH200s can be connected through the NVIDIA NVLink Switch System, and all interconnected GH200s act as a single cache-coherent system. This allows all Hoppers to communicate with each other at a bidirectional bandwidth of 900 GB/s, for a total of 19.5 TB of shared memory in a single cache-coherent system supporting direct loads, stores, and atomic operations. A single shared memory system is also provided in the Quad GH200 configuration, composed of four Superchips fully interconnected through NVLink. This configuration offers a lower interconnection bandwidth, as the 18 links that the Hopper architecture provides are equally split between three channels. In the rest of the paper, we refer to this system as being composed of peer GH200s.

II-A 5 Alps Supercomputer

For our analysis, we use the early-access Santis partition of the Alps supercomputer developed by the Swiss National Supercomputing Centre (CSCS), which is currently under provisioning. The system is made up of HPE Cray Supercomputing EX254n blades, each hosting two nodes. Figure 1 provides an overview of a node (GH200 image source: https://developer-blogs.nvidia.com/wp-content/uploads/2022/08/image3-8.png). Each node is composed of four interconnected GH200s. Every Superchip is connected to every other through NVLink and a cache-coherent interconnect, referred to from now on as the Grace Interconnect (GI). Grace traffic to peer Superchips is routed through the GI, while Hopper traffic is routed through NVLink.

Every GH200 has 96 GB of HBM3 and 128 GB of LPDDR5 memory, for 896 GB of total memory per node in its final configuration (at the time of writing, only 120 GB of LPDDR5 memory per GH200 were available). Every Quad GH200 node acts as a single NUMA system, with 288 CPU cores and 4 GPUs. Nodes are interconnected using HPE Slingshot 11 [27] in a dragonfly topology [28], with 4 injection ports per node. Each NIC is connected to a separate GH200 through a mezzanine card with two PCIe x16 Gen5 connections, for a total of 32 lanes. Each GH200 is connected to a switch through a 200 Gb/s Ethernet port, for a total of 4 ports per node and a maximum achievable bidirectional bandwidth of 100 GB/s to other nodes. The system page size is configured to be 64 KB. The CPU clock speed is 3,483 MHz without frequency boost. At the time of writing, the system runs a software stack that will change once in production. The nodes run SUSE Linux Enterprise Server 15 SP5. The NVIDIA driver is version 535.129.03. We use CUDA 12.3 and GCC 12.3.

II-B Memory Hierarchy


The performance of modern computing systems is often limited by memory access speed. Processors have evolved to include larger and more complex cache hierarchies, with faster access to memories close to where the computation is performed. This hierarchical approach is also applied when designing main memory systems, fueled by a growth in the number of processors to serve. In these complex systems, the datapath depends on which physical memory bank is accessed. Compute units have direct access to some memory controllers, but need to send requests through an interconnect to access others. This adds complexity when designing applications, as complex interactions between PUs and memories emerge [29, 30, 31].

Non-Uniform Memory Access (NUMA) is a logical division of memory supported by modern operating systems. It consists of defining the affinity of cores to different regions of memory. Every GH200 is composed of two NUMA nodes. One consists of the LPDDR5 memory with affinity to Grace, while the other consists of the HBM3 memory with affinity to Hopper. Interconnected Superchips in a Quad GH200 appear as a single device with two NUMA nodes per unit. Figure 1 highlights the different NUMA nodes that compose a node in the Alps supercomputer. The four Grace CPUs are associated with NUMA nodes 0, 1, 2, and 3, following a sequential numbering. The four Hopper GPUs are associated with NUMA nodes 4, 12, 20, and 28.

Physical allocation on NUMA nodes follows a first-access principle for memory allocated through system calls like brk or mmap, and for APIs like malloc and new. Alternatively, the NUMA node can be explicitly chosen by using the numactl utility, through libnuma, or using numa_alloc_onnode, which lets the user specify the id of the NUMA node on which to allocate memory.
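As a concrete illustration, the following is a minimal sketch (standard libnuma usage, not code from our benchmark suite) of binding a buffer to an explicit NUMA node; the node id follows the numbering described above:

    // Minimal sketch: explicit NUMA placement with libnuma (link with -lnuma).
    // Node ids follow the numbering above: 0 = first Grace LPDDR5, 4 = first Hopper HBM3.
    #include <numa.h>
    #include <cstdio>
    #include <cstring>

    int main() {
        if (numa_available() < 0) {                 // kernel without NUMA support
            std::fprintf(stderr, "NUMA not available\n");
            return 1;
        }
        const size_t size = 1UL << 30;              // illustrative 1 GiB buffer
        void *buf = numa_alloc_onnode(size, 4);     // bind physical pages to node 4 (HBM3)
        if (!buf) return 1;
        std::memset(buf, 0, size);                  // touch pages; placement is already fixed
        numa_free(buf, size);
        return 0;
    }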

II-C Unified Memory

Table II: Memory types available on the GH200, with allocation APIs, physical placement, translation path, page size, and migration behavior.

    Type              API                            Placement    GPU Translation            Page Size                  Automatic Migration
    System-allocated  mmap, malloc, new              First touch  ATS                        System                     Yes (CUDA 12.4)
    System-allocated  numa_alloc_onnode              Specified    ATS                        System                     No
    Device            cudaMalloc                     HBM          GPU-MMU                    2 MB                       No
    Managed           cudaMallocManaged              First touch  ATS (DDR), GPU-MMU (HBM)   System (DDR), 2 MB (HBM)   Yes
    Pinned            cudaMallocHost, cudaHostAlloc  DDR          ATS                        System                     No


Heterogeneous systems typically require fine-grained control of memory, with research focusing on facilitating its management [32, 33, 34]. Memory allocations are device-specific and live in separate memory spaces. Memory allocated using cudaMalloc can be accessed only by the GPU, and memory transfers between the GPU and CPU address spaces require dedicated copy APIs like cudaMemcpy.

Unified Virtual Addressing (UVA) simplifies this programming model by enabling devices (GPUs) to share a unified address space with the host (CPU), allowing every PU on the system to access memory using the same pointers. It was first introduced with CUDA 4 and allowed easy access to peer GPU memory in a multi-GPU system and zero-copy access to host pinned memory allocated using cudaMallocHost or cudaHostAlloc through DMA over PCIe. Managed Memory was introduced with CUDA 6. By calling cudaMallocManaged the user obtains a pointer to a Managed Memory region that is accessible by both the CPU and the GPU through automatic copies. On Kepler architectures, calling cudaMallocManaged results in memory being allocated on the device. When the CPU accesses this memory, a page fault is triggered and the CUDA driver migrates the page from the device to the host. On kernel launch, all managed memory is migrated to the device. Pascal introduced support for device page faults and migrations, removing the need for all memory to be copied to the device on kernel launch and introducing a first-touch policy for the physical allocation of pages.
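To make the model concrete, here is a minimal sketch of managed memory usage (standard CUDA APIs; the sizes are illustrative):

    // Minimal sketch: one managed pointer touched by host and device.
    #include <cuda_runtime.h>
    #include <cstdio>

    __global__ void scale(float *x, int n, float a) {
        int i = blockIdx.x * blockDim.x + threadIdx.x;
        if (i < n) x[i] *= a;                        // device access; pages fault and migrate in
    }

    int main() {
        const int n = 1 << 20;
        float *x;
        cudaMallocManaged(&x, n * sizeof(float));    // one pointer, valid on CPU and GPU
        for (int i = 0; i < n; i++) x[i] = 1.0f;     // host first touch places the pages
        scale<<<(n + 255) / 256, 256>>>(x, n, 2.0f);
        cudaDeviceSynchronize();
        std::printf("%f\n", x[0]);                   // pages migrate back on host access
        cudaFree(x);
        return 0;
    }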

In a Linux system the page table stores virtual-to-physical address translations. These translations are cached in a translation lookaside buffer (TLB) for faster access. The Memory Management Unit (MMU) is responsible for performing these translations. In a traditional system, the CPU and GPU have distinct MMUs and TLBs and work on separate virtual and physical addresses using separate page tables. Address Translation Services (ATS) extend the PCIe protocol to support caching of address translations. A miss in the device MMU results in an Address Translation Request to the CPU. The CPU checks its page tables for the virtual-to-physical mapping for that address and supplies the translation back to the GPU, which stores it in its local Address Translation Cache [35]. ATS was introduced in CUDA 9.2 for integration in IBM Power 9 systems using Volta GPUs through NVLink connections, and allowed for fine-grained access to memory, serving loads and stores at the cache line level [36].

In a Grace Hopper system, a single page table and virtual address space are shared between CPU and GPU. A specific unit called the Address Translation Service Translation Buffer Unit (ATS-TBU) is implemented to provide fast translations and support interaction between all MMUs and TLBs on the system. Managed memory, on the other hand, requires a full page transfer if memory is not local to the PU.

Managed memory has already been shown to outperform system memory accessed through ATS in applications with frequent memory accesses performed by the GPU [36]. To give an idea of the difference between the two systems, we develop a simple application that interleaves a series of back-to-back Hopper-issued writes with Grace-issued writes. Hopper writes use cudaMemset while Grace writes use either memset or strided stores. By setting the stride to the page size of 64 KB we test the worst case for managed memory, with the fewest bytes used per byte transferred.
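The following sketch shows our reading of that experiment's structure (the actual harness is in the repository linked above; buffer size and constants here are illustrative):

    #include <cuda_runtime.h>
    #include <cstring>

    constexpr size_t kSize   = 1UL << 30;        // illustrative 1 GiB buffer
    constexpr size_t kStride = 64 * 1024;        // system page size on Alps: 64 KB

    // One round of the interleaved pattern: a burst of Hopper-issued writes
    // followed by Grace-issued writes, either full or strided.
    void round(char *buf, int hopper_writes, bool strided) {
        for (int i = 0; i < hopper_writes; ++i) {
            cudaMemset(buf, 1, kSize);           // back-to-back Hopper-issued writes
            cudaDeviceSynchronize();
        }
        if (strided) {
            for (size_t off = 0; off < kSize; off += kStride)
                buf[off] = 2;                    // one byte touched per page: worst case for
        } else {                                 // managed memory (the whole page migrates)
            std::memset(buf, 2, kSize);          // full Grace-issued write
        }
    }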

Figure 4 shows the time it takes to execute the application with different numbers of iterations and different types of allocations (lower is better). The runtime of managed memory gets asymptotically close to that of system-allocated memory on HBM. Managed memory results in a faster runtime only after a large number (around the 128 mark) of back-to-back iterations on Hopper. For a large number of Hopper iterations, there is no difference between the full-write and strided-write versions, showing that the workload is heavily GPU-bound. For a low number of iterations, the full-write version shows better performance on DDR than on HBM, showing that the workload is CPU-bound. ATS shines in workloads where the access patterns are more complex than sequential reads and writes.

Table II provides an overview of the different types of memory that can be allocated on the GH200, describing the available APIs and the physical placement of the allocated memory. System-allocated memory is made accessible to both host and device through ATS, and benefits from fine-grained access through the C2C interconnect. cudaMalloc allocates memory on the GPU; it is the only listed API that does not allow direct CPU access. cudaMallocHost allocates pinned memory on the host, allowing for streaming access from the device through DMA engines. This is necessary for cudaMemcpyAsync to work asynchronously, and can improve CPU-GPU memory transfer speeds [37]. cudaMallocManaged allocates managed memory in a uniform memory space shared between host and device, managed by the NVIDIA driver at page granularity.


As an illustrative example of how different physical allocations, allocation APIs, and data movement APIs interact in complex ways, we evaluate the bandwidth achieved by cudaMemcpy on different types of memory, and show the results in Figure 5. As cudaMemcpy employs different implementations based on the type of source and destination memory, the test can validate its optimality. The results shown ignore the first warmup run and perform repeated iterations; for this reason, memory allocated through cudaMallocManaged will always have optimal placement. Pinned memory is allocated on the GH200 local to the GPU. We also tested cudaMemcpyAsync, which showed comparable performance. Our results show that the slower host-based implementation is used for all system-allocated memory regardless of physical placement.
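A minimal sketch of such a bandwidth measurement (assumed shape; cudaMemcpyDefault lets the driver infer the transfer direction under UVA):

    #include <cuda_runtime.h>

    // Measure cudaMemcpy bandwidth between two buffers, ignoring the warmup run.
    double memcpy_bandwidth(void *dst, const void *src, size_t bytes, int iters) {
        cudaEvent_t t0, t1;
        cudaEventCreate(&t0); cudaEventCreate(&t1);
        cudaMemcpy(dst, src, bytes, cudaMemcpyDefault);   // discarded warmup run
        cudaEventRecord(t0);
        for (int i = 0; i < iters; ++i)
            cudaMemcpy(dst, src, bytes, cudaMemcpyDefault);
        cudaEventRecord(t1);
        cudaEventSynchronize(t1);
        float ms = 0.0f;
        cudaEventElapsedTime(&ms, t0, t1);
        return (double)bytes * iters / (ms * 1e6);        // GB/s
    }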

III Microbenchmarks

We motivate, describe, and show the results of our memory movement-oriented microbenchmarks. We provide a thorough analysis of the bandwidth and latency of memory operations issued by different PUs on the different main memories of the system.

III-A Datapaths

A tightly coupled system like the Quad GH200 is composed of many parts communicating through interconnects with different characteristics and involves different hardware and software subsystems.


III-B Methodology

We call a kernel any function that performs a benchmark, excluding all synchronization and measurement code. We develop kernels both for the CPU, using C++ and inline ARM assembly, and for the GPU, using CUDA. These kernels perform simple memory operations on buffers allocated in different ways and passed as input arguments. Unless stated otherwise, all reported numbers are an average over 10 measurements, discarding the first warmup run.

III-B 1 Timers

CPU times are obtained by reading the cntvct_el0 register, a virtual counter that is globally available and uniform across all cores and all GI-interconnected GH200s. Its frequency can be queried by reading the cntfrq_el0 register and is 1 GHz on our system. We observe a clock resolution of 32 ns by issuing subsequent reads of the timer.
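Reading these registers from C++ takes one inline-assembly instruction each (standard AArch64 system-register access):

    #include <cstdint>

    static inline uint64_t read_cntvct() {              // virtual counter, 1 GHz on our system
        uint64_t v;
        asm volatile("mrs %0, cntvct_el0" : "=r"(v));
        return v;
    }

    static inline uint64_t read_cntfrq() {              // counter frequency in Hz
        uint64_t f;
        asm volatile("mrs %0, cntfrq_el0" : "=r"(f));
        return f;
    }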

CUDA supports two on-device timers. The %clock register can be queried with the clock and clock64 library calls. Reading it on different SMs produces different values. It advances at the device clock speed, which can be queried with cudaGetDeviceProperties and is 1.98 GHz on our system. We observe a clock resolution of 7 cycles, or 3.54 ns. The %globaltimer register can be queried by writing explicit PTX instructions using inline assembly. To find its clock speed, we run an experiment that queries the %clock and %globaltimer registers in succession at intervals; from how the difference between the two grows, we find that the frequency of %globaltimer is 1 GHz. We observe a clock resolution of 32 ns, the same as for the cntvct_el0 system timer. The two timers are not directly comparable as they yield unrelated values.
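The device-side counterparts can be sketched as follows (clock64 is a CUDA intrinsic; %globaltimer requires inline PTX):

    #include <cstdint>

    __device__ uint64_t read_globaltimer() {            // 1 GHz, wall-clock-like
        uint64_t t;
        asm volatile("mov.u64 %0, %%globaltimer;" : "=l"(t));
        return t;
    }

    __device__ uint64_t read_sm_clock() {               // SM clock, 1.98 GHz on our system;
        return (uint64_t)clock64();                     // values differ between SMs
    }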

III-B 2 Multithreaded Benchmarks

Our test infrastructure allows us to control how many computational resources are used in a kernel. For Grace, we spawn and pin threads to separate cores on application startup. In multi-threaded benchmarks, buffers are equally divided among the threads, such that each of them works on an equal number of non-overlapping sequential cache lines. A control thread is responsible for choosing a start time step for the test, which is selected by reading the clock from the cntvct_el0 register and incrementing it by a fixed amount. The control thread communicates to all threads the kernel to run, its arguments, and the chosen start time step. All threads read the clock until the start time step is reached, execute the test, and record the final time step. The total time is taken as the maximum among all final time steps minus the initial time step.

The test setup on Hopper makes use of the cooperative threads API to enable grid-level synchronization. This restricts the grid and block size such that all threads are active at once, and the kernel must be launched using cudaLaunchCooperativeKernel. A thread selects a starting time step analogously to what happens on the CPU, by reading the %globaltimer register, and communicates it to all other threads by writing it to global memory. Our tests show that a call to __syncthreads is necessary after grid synchronization for all threads to be aligned and correctly spin-wait for the start time step before beginning the test. As global GPU memory is optimized for coalesced access, all threads access memory in a strided fashion, with a stride equal to the number of threads in the grid and an initial offset equal to the id of the thread. The end time step of each thread is recorded, and the maximum is taken.
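Putting the pieces together, the start protocol can be sketched as follows (a sketch of the shape described above, not our verbatim code; it reuses the read_globaltimer helper from the previous sketch):

    #include <cooperative_groups.h>
    namespace cg = cooperative_groups;

    __global__ void timed_bench(const uint64_t *start_step,
                                unsigned long long *buf, size_t n) {
        cg::grid_group grid = cg::this_grid();
        grid.sync();                                 // grid-level barrier (cooperative launch)
        __syncthreads();                             // extra block barrier needed for alignment
        while (read_globaltimer() < *start_step) {}  // spin until the agreed start time step
        // ... benchmark body elided: strided access with stride = total grid threads ...
    }
    // Must be launched with cudaLaunchCooperativeKernel so all blocks are resident at once.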

III-C Read and Write

The CPU write kernel uses STP to store 16 bytes with a single instruction. The GPU write kernel stores 8 bytes at a time. Different techniques are used to program the read-only kernels. On CPU, the LDP ARM assembly instruction is explicitly issued; LDP loads two doublewords from memory into two registers, effectively moving 16 bytes with a single instruction. CUDA kernels cannot rely on this method, as inline PTX instructions can still be optimized out by later compiler passes. Instead, dummy work is performed at the thread level, in the form of a XOR operation on the read value. Issuing a sufficient number of read operations per cycle is fundamental to achieving peak bandwidth for kernel launches with few threads. We find that using the ulonglong2 datatype achieves the best bandwidth for launches with one block of 1024 threads.
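A sketch of the GPU read kernel logic (the ulonglong2 loads move 16 bytes each, and the XOR dummy work keeps the compiler from eliminating them):

    __global__ void read_kernel(const ulonglong2 *buf, size_t n,
                                unsigned long long *sink) {
        size_t stride = (size_t)gridDim.x * blockDim.x;
        unsigned long long acc = 0;
        for (size_t i = (size_t)blockIdx.x * blockDim.x + threadIdx.x;
             i < n; i += stride) {
            ulonglong2 v = buf[i];                   // 16-byte coalesced load
            acc ^= v.x ^ v.y;                        // dummy work at the thread level
        }
        if (acc == 0xdeadbeefULL) *sink = acc;       // practically never taken; keeps acc live
    }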


We also want to measure the bidirectional bandwidth of the C2C interconnect, as well as the ability of the system to handle a large number of memory operations issued by multiple PUs on the same GH200. To do this, we develop Grace and Hopper noise kernels that continuously read from a large buffer of 8 GB. To stress the C2C interconnect, the Grace noise kernel reads HBM system-allocated memory and the Hopper noise kernel reads DDR-allocated memory. We start the noise kernel on one PU and run the read and write tests on the other.

We report our results in Figure 7. We show the achieved bandwidth in GB/s as well as the ratio of achieved bandwidth to the maximum theoretical bandwidth according to Figure 3. In the simple benchmarks, Hopper is better than Grace at making use of the C2C interconnect when accessing the other PU's local memory, with read and write bandwidth to DDR of 93% and 84%, respectively, compared to the 53% and 64% achieved by Grace in operations to HBM. Operations that cross both the C2C interconnect and NVLink incur considerable overheads and never go above 60% of theoretical bandwidth.

When adding noise, accesses to peer GH200 memory are not affected. Bandwidth to local DDR is limited by the 500 GB/s that must be split between two PUs. Writes to HBM are the most impacted, with Grace reaching 17% and Hopper 65% of the theoretical maximum. Summing the bandwidth of both PUs, 2682.8 GB/s is reached, which is only 67% of the theoretical maximum.

III-D Copy

In the Grace copy kernel, a single loop iteration contains four pairs of LDP and STP instructions on separate pairs of registers to ensure pipelining. The Hopper kernel performs 8-byte wide copies in a stride equal to the number of threads. Throughput is measured as the size of the buffer over transfer time. The benchmarks highlight the duplex characteristics of some interconnects on the system. For example, copying from DDR to DDR results in a measured bandwidth that is about half the bandwidth of a read operation.
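The inner loop of the Grace copy kernel can be sketched in AArch64 inline assembly as follows (illustrative; the register choice is arbitrary):

    // One 64-byte iteration: four LDP/STP pairs on separate register pairs,
    // so the loads can pipeline ahead of the stores.
    static inline void copy64(const void *src, void *dst) {
        asm volatile(
            "ldp x4,  x5,  [%0]\n\t"
            "ldp x6,  x7,  [%0, #16]\n\t"
            "ldp x8,  x9,  [%0, #32]\n\t"
            "ldp x10, x11, [%0, #48]\n\t"
            "stp x4,  x5,  [%1]\n\t"
            "stp x6,  x7,  [%1, #16]\n\t"
            "stp x8,  x9,  [%1, #32]\n\t"
            "stp x10, x11, [%1, #48]\n\t"
            :
            : "r"(src), "r"(dst)
            : "x4", "x5", "x6", "x7", "x8", "x9", "x10", "x11", "memory");
    }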

Results for both Grace and Hopper are shown in Figure 9. We note the following behaviors:

  • There are asymmetries in memory transfers: Grace achieves a higher throughput when copying from local memory to a peer GH200 than in the opposite direction, and local DDR to HBM transfers are faster than HBM to DDR transfers.
  • Hopper does a better job of utilizing the available bandwidth when crossing multiple interconnects.

Figure 10 shows the scalability of the copies performed by Grace and Hopper for the cases of system-allocated memories that reside on a single GH200.

III-E Latency Benchmarks

We measure the latency of main memory accesses and of core-to-core communication.

III-E 1 Pointer Chase


To measure the access latency to memory we employ a pointer chase benchmark. We modify Google’s multichase benchmark [38] to support NUMA allocations and add a GPU pointer-chase kernel. The benchmark performs a pointer chase for 2.5 seconds, recording the number of accesses at a granularity of 200. We show memory access latency in Figure 11. Accesses that cross the C2C interconnect (Grace to HBM and Hopper to DDR) show the same latency, as do accesses to peer HBM. We show the scalability of memory access latency with increasing buffer size in Figure 12. As the pointer chase iterates over the same buffer multiple times, we can also measure the latency of accesses to the caches. Cache sizes are highlighted using vertical lines. Grace displays a simple caching behavior, with all types of memories showing the same latency if the buffer size is within some cache bound. Hopper, on the other hand, shows a behavior that is dependent on where the memory is physically allocated. We make the following key observations:

  • If the buffer fits in the Hopper L1 cache, the latency is the same regardless of the physical allocation. For buffers larger than L1, the behavior changes for all types of physical memory allocations.
  • For memory physically allocated on DDR, the Grace L2 cache affects latency. For buffers larger than the Grace L2, the latency never changes, showing that these accesses are never cached.
  • The Hopper L2 cache can cache data that is physically allocated on HBM, both local and peer. L2-resident peer HBM accesses are faster than local DDR accesses.

This behavior highlights great differences in how cache coherency and memory accesses are handled by Grace and Hopper.
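For reference, the core of a pointer chase is a chain of dependent loads, so each access must wait for the previous one to complete; a minimal sketch of the GPU kernel, launched with a single thread:

    __global__ void chase(void **start, long steps, void **out) {
        void **p = start;
        for (long i = 0; i < steps; ++i)
            p = (void **)*p;       // each load depends on the previous: pure latency
        *out = (void *)p;          // keep the chain live
    }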

III-E 2 Ping-Pong


We run a ping-pong benchmark to evaluate the communication latency inside a PU and between different PUs. This benchmark highlights the characteristics of the cache-coherent interconnects, as well as the behavior of the cache coherency protocol when dealing with atomics. We leverage the atomic compare-and-swap (CAS) operation and the interoperability of atomic types between host and device offered by CUDA with the cuda::std::atomic type. CAS can be used as an atomic conditional store, writing a desired value to a memory location if it contains an expected value. To avoid any spurious contention, an atomic flag of one byte is placed in a chunk of memory the size of two cache lines. The atomic flag is initialized with a PONG value. Two threads, one running the ping function and the other running the pong function, are started on the PUs of interest. The ping function and the pong function are implemented both for the host and for the device. The ping function conditionally sets the flag to PING if its value is PONG, while the pong function does the opposite. Time is measured from the first successful communication. The ping-pong host functions are compiled using GCC 13.2 with support for Neoverse-V2 to enable issuing the ARM CASB instruction.
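A sketch of the ping side (the pong side swaps the two constants; the flag value names are ours):

    #include <cuda/std/atomic>
    #include <cstdint>

    constexpr uint8_t PING = 1, PONG = 2;        // illustrative flag values

    __host__ __device__ void ping_fn(cuda::std::atomic<uint8_t> *flag, long iters) {
        for (long i = 0; i < iters; ++i) {
            uint8_t expected = PONG;
            // CAS as an atomic conditional store: write PING only if the flag is PONG
            while (!flag->compare_exchange_strong(expected, PING))
                expected = PONG;                 // retry with a fresh expected value
        }
    }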

Core-to-core tests show non-uniform latencies depending on where the memory is physically allocated. This is a result of the cache coherency protocol and the way that atomic operations are requested and executed. Figure 13 shows the results of the ping-pong benchmark being run between different PUs with different allocations of the flag. All allocations are done on GH-0. Exchanges involving the GH200 where the memory is physically allocated are faster, with local exchanges being the fastest. Hopper-Hopper communications benefit greatly from memory being physically allocated on HBM. All Grace-Hopper communications benefit from memory being allocated on the HBM of the Hopper participating in the communication, except for local Grace ping to remote Hopper pong. Grace-Grace communication is faster if the memory is allocated on a participating GH, but shows no difference based on the type of memory.

III-F Internode Benchmarks

We evaluate the bandwidth of internode communications in the Alps supercomputer. Our benchmarks are based on MPI, the dominant programming model for parallel and distributed architectures. We use Cray MPICH 8.1.28 on top of Libfabric 1.15.2. We activate the GPU Transport Layer (GTL) to handle buffers allocated with cudaMalloc directly using DMA.

We run a node-to-node network bandwidth test using MPI_Isend and MPI_Irecv. In Figure 14 we show the results, scaling both the buffer size and the number of processes per node, in both the unidirectional and the bidirectional case. As an MPI process can utilize only one network interface, four processes are required to utilize the full bandwidth of a node. The amount of data to be transferred is equally split among the processes of the node. Processes are assigned to different NUMA nodes. We show the results for allocations on local DDR only, as we find negligible differences when using other types of allocations.

Only one of the two nodes performs the measurement. Analogously to our multithreaded benchmarks, one of the processes of the measuring node selects a starting time and communicates it to all other local processes. The final time is taken as the maximum among all local processes.
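A sketch of the unidirectional measurement for a single process pair (assumed shape; the real harness splits the buffer across four processes per node):

    #include <mpi.h>

    // Rank 0 on node A sends, rank 1 on node B receives; returns GB/s.
    double uni_bandwidth(char *buf, int bytes, int rank) {
        MPI_Request req;
        MPI_Barrier(MPI_COMM_WORLD);             // align both sides before timing
        double t0 = MPI_Wtime();
        if (rank == 0)
            MPI_Isend(buf, bytes, MPI_BYTE, 1, 0, MPI_COMM_WORLD, &req);
        else
            MPI_Irecv(buf, bytes, MPI_BYTE, 0, 0, MPI_COMM_WORLD, &req);
        MPI_Wait(&req, MPI_STATUS_IGNORE);
        return bytes / ((MPI_Wtime() - t0) * 1e9);
    }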


IV Applications

We show the performance results of simple applications as a function of the physical memory placement, using the same framework and terminology as our microbenchmarks.

IV-A GEMM

Table III: Advertised H100 SXM throughput per datatype.

    Type   Tensor Core TFLOPS
    FP16   989
    TF32   494
    FP32   67
    FP64   67

Matrix-matrix multiplication is a fundamental building block of scientific computing applications. Its importance led to the development of the Tensor Core, a dedicated multiply-and-accumulate unit on NVIDIA GPUs [39]. The rise of Large Language Models (LLMs) and the transformer architecture [40] has made the performance of this operation critical for machine learning workloads as well.

Figure 15 shows the performance of a GEMM operation multiplying source matrices A and B. Our tests show that reads have the greatest impact, while the placement of the destination matrix has a negligible effect on performance. We show data for experiments where the destination matrix C is always placed in local HBM memory. The implementation is provided by cuBLAS 12.3 and uses a single Hopper. We measure the performance achieved using different data types. We report in Table III the advertised TFLOPS of the H100 SXM GPU for different datatypes, which are analogous to those of the Hopper in the GH200. Every matrix is 4 GB. We find that this size is enough to hide the effect of caches and reach asymptotic throughput.
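Such a call can be sketched with the standard cuBLAS API as follows (not our harness; A, B, and C can be placed on different physical memories using the allocation APIs of Table II):

    #include <cublas_v2.h>
    #include <cuda_fp16.h>

    // n x n FP16 GEMM with FP16 accumulation on Tensor Cores: C = A * B
    void gemm_fp16(cublasHandle_t h, const __half *A, const __half *B,
                   __half *C, int n) {
        const __half one = __float2half(1.0f), zero = __float2half(0.0f);
        cublasGemmEx(h, CUBLAS_OP_N, CUBLAS_OP_N, n, n, n,
                     &one,  A, CUDA_R_16F, n,
                            B, CUDA_R_16F, n,
                     &zero, C, CUDA_R_16F, n,
                     CUBLAS_COMPUTE_16F, CUBLAS_GEMM_DEFAULT);
    }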

We also measure the performance scalability with increasing matrix size and show the results in Figure 16 for fixed physical memory allocation and varying datatype, as well as fixed datatype and varying the physical memory allocation. We round down the size of the matrix to be a multiple of 8, to make sure that the tensor cores are fully utilized.

We observe that except for FP16, where smaller sizes that fit in the cache see a higher throughput, HBM provides enough bandwidth to make the workload compute-bound. As soon as one of the matrices is moved away from HBM, the performance drops and the workload becomes heavily memory-bound, especially for the datatypes making use of Tensor Cores. The access patterns for matrices A and B are different, which is reflected in the asymmetry of Figure 15.

IV-B LLM inference


The growth in the size of LLMs has led to memory footprint becoming a fundamental problem in their training and deployment [41, 42]. Having access to a larger pool of memory opens up opportunities to run these workloads using fewer machines.

We run an inference workload on the Llama2-7b and Llama2-13b models [43], using the HuggingFace APIs to generate 100 tokens from an empty prompt with the torch.float16 datatype, PyTorch 2.2.0 [44], and Python 3.11.7. We use the pluggable allocator functionality of the PyTorch library to control memory allocations. Due to stability and performance issues, small allocations (less than 1 MB) are done using cudaMallocAsync, while large allocations are done using numa_alloc_onnode. The peak memory utilization for small and large allocations is respectively 133 MB and 27 GB for Llama2-7b, and 168 MB and 52 GB for Llama2-13b, making small allocations negligible (<1%).
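The two hooks such a pluggable allocator exposes can be sketched as follows (assumed shape; the threshold and target node mirror the setup described above):

    #include <numa.h>
    #include <cuda_runtime.h>
    #include <mutex>
    #include <unordered_set>

    constexpr size_t kLargeThreshold = 1UL << 20;   // 1 MB split, as described above
    constexpr int    kTargetNode     = 4;           // illustrative: a Hopper HBM3 node

    static std::mutex g_mtx;
    static std::unordered_set<void *> g_numa_ptrs;  // remember which path allocated what

    extern "C" void *alloc_fn(size_t size, int device, cudaStream_t stream) {
        if (size < kLargeThreshold) {               // small: route to the CUDA pool
            void *p = nullptr;
            cudaMallocAsync(&p, size, stream);
            return p;
        }
        void *p = numa_alloc_onnode(size, kTargetNode);
        std::lock_guard<std::mutex> g(g_mtx);
        g_numa_ptrs.insert(p);
        return p;
    }

    extern "C" void free_fn(void *ptr, size_t size, int device, cudaStream_t stream) {
        std::lock_guard<std::mutex> g(g_mtx);
        if (g_numa_ptrs.erase(ptr)) numa_free(ptr, size);
        else                        cudaFreeAsync(ptr, stream);
    }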

We show results in Figure 17 . Memory access speed plays a fundamental role in the throughput. Compared to purely memory-bound synthetic workloads, however, the difference in performance is less dramatic. We also show the baseline performance. Our allocator is slower as it incurs synchronization overheads for large allocations.

IV-C NCCL

NVIDIA Collectives Communication Library (NCCL) is a host library that implements various communication primitives for NVIDIA GPUs. It supports multi-GPU setups, both single-node and multi-node, and makes use of PCIe, NVLink, and networking transparently. It provides the building blocks necessary to develop large-scale multi-GPU applications.

We show our results for the all-reduce and all-gather operations. Bandwidth is calculated as the size of the buffer over the time it takes to complete the operation. In our tests, system-allocated memory on HBM and memory allocated through cudaMalloc showed the same performance. In Figure 18 we show the performance scalability with increasing buffer size when running four processes on the same node. Our results show the importance of locality, with same-GH200 memory greatly outperforming peer access, and HBM and DDR showing similar throughput.
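One measured operation can be sketched with the standard NCCL API (communicator and stream setup omitted):

    #include <nccl.h>
    #include <cuda_runtime.h>

    // One all-reduce over `count` floats; bandwidth = buffer size / elapsed time.
    void all_reduce(const float *send, float *recv, size_t count,
                    ncclComm_t comm, cudaStream_t stream) {
        ncclAllReduce(send, recv, count, ncclFloat, ncclSum, comm, stream);
        cudaStreamSynchronize(stream);   // collectives complete asynchronously on the stream
    }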

In Figure 19 we show the performance scalability with an increasing number of nodes participating in the collective, with four processes per node. The size of the buffer for the all-reduce operation is 4 GB, while the size of the buffer for the all-gather operation is 16 MB times the number of processes participating in the collective. Peer DDR memory access severely limits the performance of the collectives. In all other cases, performance differences are negligible.

Our results show that Superchip locality, more than the type of memory used, plays an important role in applications making heavy use of collective operations across multiple processes.

V Conclusions

This paper offers a comprehensive view of the memory hierarchy within the Quad GH200 node configuration of the Alps supercomputer. We conduct benchmarks on read, write, and copy operations across all combinations of physical memory allocations and processing units. Our analysis relates the measured performance to the theoretical bounds provided by the datapaths of the individual operations. Additionally, we present performance figures for example applications, highlighting the significance of data placement and memory access patterns for memory-bound workloads.

We argue that despite the sophisticated memory system of the Quad GH200 node, looking at the system in terms of individual interconnected Superchips is crucial to achieving good performance. The C2C interconnect lives up to its promise and opens up possibilities for the development of heterogeneous applications mixing CPU and GPU computations, and for effectively extending the pool of memory available to PUs.

  • [1] S. Matsuoka, T. Aoki, T. Endo, A. Nukada, T. Kato, and A. Hasegawa, “Gpu accelerated computing–from hype to mainstream, the rebirth of vector computing,” in Journal of Physics: Conference Series , vol. 180, p. 012043, IOP Publishing, 2009.
  • [2] C. A. Navarro, N. Hitschfeld-Kahler, and L. Mateu, “A survey on parallel computing and its applications in data-parallel problems using gpu architectures,” Communications in Computational Physics , vol. 15, no. 2, pp. 285–329, 2014.
  • [3] M. A. Giorgetta, W. Sawyer, X. Lapillonne, P. Adamidis, D. Alexeev, V. Clément, R. Dietlicher, J. F. Engels, M. Esch, H. Franke, et al. , “The icon-a model for direct qbo simulations on gpus (version icon-cscs: baf28a514),” Geoscientific Model Development , vol. 15, no. 18, pp. 6985–7016, 2022.
  • [4] M. Isaev, N. McDonald, and R. Vuduc, “Scaling infrastructure to support multi-trillion parameter llm training,” in Architecture and System Support for Transformer Models (ASSYST@ ISCA 2023) , 2023.
  • [5] S. Kato, J. Aumiller, and S. Brandt, “Zero-copy i/o processing for low-latency gpu computing,” in Proceedings of the ACM/IEEE 4th International Conference on Cyber-Physical Systems , pp. 170–178, 2013.
  • [6] B. Van Werkhoven, J. Maassen, F. J. Seinstra, and H. E. Bal, “Performance models for cpu-gpu data transfers,” in 2014 14th IEEE/ACM International Symposium on Cluster, Cloud and Grid Computing , pp. 11–20, IEEE, 2014.
  • [7] M. Bauer, H. Cook, and B. Khailany, “Cudadma: optimizing gpu memory bandwidth via warp specialization,” in Proceedings of 2011 international conference for high performance computing, networking, storage and analysis , pp. 1–11, 2011.
  • [8] A. Li, S. L. Song, J. Chen, J. Li, X. Liu, N. R. Tallent, and K. J. Barker, “Evaluating modern gpu interconnect: Pcie, nvlink, nv-sli, nvswitch and gpudirect,” IEEE Transactions on Parallel and Distributed Systems , vol. 31, no. 1, pp. 94–110, 2019.
  • [9] L. Zhang, M. Wahib, P. Chen, J. Meng, X. Wang, T. Endo, and S. Matsuoka, “Perks: a locality-optimized execution model for iterative memory-bound gpu applications,” in Proceedings of the 37th International Conference on Supercomputing , pp. 167–179, 2023.
  • [10] S. Mittal and J. S. Vetter, “A survey of cpu-gpu heterogeneous computing techniques,” ACM Computing Surveys (CSUR) , vol. 47, no. 4, pp. 1–35, 2015.
  • [11] J. Shen, A. L. Varbanescu, H. Sips, M. Arntzen, and D. G. Simons, “Glinda: A framework for accelerating imbalanced applications on heterogeneous platforms,” in Proceedings of the ACM International Conference on Computing Frontiers , pp. 1–10, 2013.
  • [12] J. Shen, A. L. Varbanescu, and H. Sips, “Look before you leap: Using the right hardware resources to accelerate applications,” in 2014 IEEE Intl Conf on High Performance Computing and Communications, 2014 IEEE 6th Intl Symp on Cyberspace Safety and Security, 2014 IEEE 11th Intl Conf on Embedded Software and Syst (HPCC, CSS, ICESS) , pp. 383–391, IEEE, 2014.
  • [13] J. Shen, A. L. Varbanescu, Y. Lu, P. Zou, and H. Sips, “Workload partitioning for accelerating applications on heterogeneous platforms,” IEEE Transactions on Parallel and Distributed Systems , vol. 27, no. 9, pp. 2766–2780, 2015.
  • [14] P. Mistry, Y. Ukidave, D. Schaa, and D. Kaeli, “Valar: A benchmark suite to study the dynamic behavior of heterogeneous systems,” in Proceedings of the 6th Workshop on General Purpose Processor Using Graphics Processing Units , pp. 54–65, 2013.
  • [15] J. A. Stratton, C. Rodrigues, I.-J. Sung, N. Obeid, L.-W. Chang, N. Anssari, G. D. Liu, and W.-m. W. Hwu, “Parboil: A revised benchmark suite for scientific and commercial throughput computing,” Center for Reliable and High-Performance Computing , vol. 127, no. 7.2, 2012.
  • [16] A. Danalis, G. Marin, C. McCurdy, J. S. Meredith, P. C. Roth, K. Spafford, V. Tipparaju, and J. S. Vetter, “The scalable heterogeneous computing (shoc) benchmark suite,” in Proceedings of the 3rd workshop on general-purpose computation on graphics processing units , pp. 63–74, 2010.
  • [17] S. Che, M. Boyer, J. Meng, D. Tarjan, J. W. Sheaffer, S.-H. Lee, and K. Skadron, “Rodinia: A benchmark suite for heterogeneous computing,” in 2009 IEEE international symposium on workload characterization (IISWC) , pp. 44–54, IEEE, 2009.
  • [18] J. Li, Y. Wang, X. Liang, and H. Liu, “Automatic blas offloading on unified memory architecture: A study on nvidia grace-hopper,” in Practice and Experience in Advanced Research Computing 2024: Human Powered Computing , pp. 1–5, 2024.
  • [19] G. Schieffer, J. Wahlgren, J. Ren, J. Faj, and I. Peng, “Harnessing integrated cpu-gpu system memory for hpc: a first look into grace hopper,” arXiv preprint arXiv:2407.07850 , 2024.
  • [20] J. D. McCalpin et al. , “Memory bandwidth and machine balance in current high performance computers,” IEEE computer society technical committee on computer architecture (TCCA) newsletter , vol. 2, no. 19-25, 1995.
  • [21] D. Unat, A. Dubey, T. Hoefler, J. Shalf, M. Abraham, M. Bianco, B. L. Chamberlain, R. Cledat, H. C. Edwards, H. Finkel, et al. , “Trends in data locality abstractions for hpc systems,” IEEE Transactions on Parallel and Distributed Systems , vol. 28, no. 10, pp. 3007–3020, 2017.
  • [22] T. Ben-Nun, J. de Fine Licht, A. N. Ziogas, T. Schneider, and T. Hoefler, “Stateful dataflow multigraphs: A data-centric model for performance portability on heterogeneous architectures,” in Proceedings of the International Conference for High Performance Computing, Networking, Storage and Analysis , pp. 1–14, 2019.
  • [23] A. Ivanov, N. Dryden, T. Ben-Nun, S. Li, and T. Hoefler, “Data movement is all you need: A case study on optimizing transformers,” Proceedings of Machine Learning and Systems , vol. 3, pp. 711–732, 2021.
  • [24] Y. Wei, Y. C. Huang, H. Tang, N. Sankaran, I. Chadha, D. Dai, O. Oluwole, V. Balan, and E. Lee, “9.3 nvlink-c2c: A coherent off package chip-to-chip interconnect with 40gbps/pin single-ended signaling,” in 2023 IEEE International Solid-State Circuits Conference (ISSCC) , pp. 160–162, IEEE, 2023.
  • [25] “Amba chi architecture specification.” https://developer.arm.com/documentation/ihi0050/latest/ .
  • [26] “Arm® neoverse™ v2 core technical reference manual.” https://developer.arm.com/documentation/102375/latest/ .
  • [27] D. De Sensi, S. Di Girolamo, K. H. McMahon, D. Roweth, and T. Hoefler, “An in-depth analysis of the slingshot interconnect,” in SC20: International Conference for High Performance Computing, Networking, Storage and Analysis , pp. 1–14, IEEE, 2020.
  • [28] J. Kim, W. J. Dally, S. Scott, and D. Abts, “Technology-driven, highly-scalable dragonfly topology,” ACM SIGARCH Computer Architecture News , vol. 36, no. 3, pp. 77–88, 2008.
  • [29] T. Brecht, “On the importance of parallel application placement in numa multiprocessors,” in Symposium on Experiences with Distributed and Multiprocessor Systems (SEDMS IV) , pp. 1–18, 1993.
  • [30] S. Ramos and T. Hoefler, “Capability models for manycore memory systems: A case-study with xeon phi knl,” in 2017 IEEE International Parallel and Distributed Processing Symposium (IPDPS) , pp. 297–306, IEEE, 2017.
  • [31] U. Milic, O. Villa, E. Bolotin, A. Arunkumar, E. Ebrahimi, A. Jaleel, A. Ramirez, and D. Nellans, “Beyond the socket: Numa-aware gpus,” in Proceedings of the 50th Annual IEEE/ACM International Symposium on Microarchitecture , pp. 123–135, 2017.
  • [32] M. Dashti and A. Fedorova, “Analyzing memory management methods on integrated cpu-gpu systems,” in Proceedings of the 2017 ACM SIGPLAN International Symposium on Memory Management , pp. 59–69, 2017.
  • [33] L. Wang, J. Ye, Y. Zhao, W. Wu, A. Li, S. L. Song, Z. Xu, and T. Kraska, “Superneurons: Dynamic gpu memory management for training deep neural networks,” in Proceedings of the 23rd ACM SIGPLAN symposium on principles and practice of parallel programming , pp. 41–53, 2018.
  • [34] S. Haria, M. D. Hill, and M. M. Swift, “Devirtualizing memory in heterogeneous systems,” in Proceedings of the Twenty-Third International Conference on Architectural Support for Programming Languages and Operating Systems , pp. 637–650, 2018.
  • [35] “Address translation services.” https://developer.arm.com/documentation/109242/0100/Operation-of-an-SMMU/Address-Translation-Services .
  • [36] R. Gayatri, K. Gott, and J. Deslippe, “Comparing managed memory and ats with and without prefetching on nvidia volta gpus,” in 2019 IEEE/ACM Performance Modeling, Benchmarking and Simulation of High Performance Computer Systems (PMBS) , pp. 41–46, IEEE, 2019.
  • [37] C. Pearson, A. Dakkak, S. Hashash, C. Li, I.-H. Chung, J. Xiong, and W.-M. Hwu, “Evaluating characteristics of cuda communication primitives on high-bandwidth interconnects,” in Proceedings of the 2019 ACM/SPEC International Conference on Performance Engineering , pp. 209–218, 2019.
  • [38] “Multichase - a pointer chaser benchmark.” https://github.com/google/multichase .
  • [39] S. Markidis, S. W. Der Chien, E. Laure, I. B. Peng, and J. S. Vetter, “Nvidia tensor core programmability, performance & precision,” in 2018 IEEE international parallel and distributed processing symposium workshops (IPDPSW) , pp. 522–531, IEEE, 2018.
  • [40] A. Vaswani, N. Shazeer, N. Parmar, J. Uszkoreit, L. Jones, A. N. Gomez, Ł. Kaiser, and I. Polosukhin, “Attention is all you need,” Advances in neural information processing systems , vol. 30, 2017.
  • [41] T. Hoefler, D. Alistarh, T. Ben-Nun, N. Dryden, and A. Peste, “Sparsity in deep learning: Pruning and growth for efficient inference and training in neural networks,” Journal of Machine Learning Research , vol. 22, no. 241, pp. 1–124, 2021.
  • [42] E. Frantar, S. Ashkboos, T. Hoefler, and D. Alistarh, “Gptq: Accurate post-training quantization for generative pre-trained transformers,” arXiv preprint arXiv:2210.17323 , 2022.
  • [43] H. Touvron, L. Martin, K. Stone, P. Albert, A. Almahairi, Y. Babaei, N. Bashlykov, S. Batra, P. Bhargava, S. Bhosale, et al. , “Llama 2: Open foundation and fine-tuned chat models,” arXiv preprint arXiv:2307.09288 , 2023.
  • [44] A. Paszke, S. Gross, F. Massa, A. Lerer, J. Bradbury, G. Chanan, T. Killeen, Z. Lin, N. Gimelshein, L. Antiga, et al. , “Pytorch: An imperative style, high-performance deep learning library,” Advances in neural information processing systems , vol. 32, 2019.

IMAGES

  1. IBM: CASE STUDY by Alan Lau on Prezi

    case study on ibm

  2. ibm case study strategic management

    case study on ibm

  3. IBM Case Study

    case study on ibm

  4. IBM Case Study

    case study on ibm

  5. Ibm case study

    case study on ibm

  6. Award Winning Case Study: IBM Improving Customer Experience

    case study on ibm

VIDEO

  1. IBM

  2. module-3 Case study​–IBM Envizi–Data analytics for greenhouse emissions||Make an Impact with DA #ibm

  3. IBM Smarter Cities Case Study

COMMENTS

  1. Cloud case studies

    The solution's IBM Cloud public hosting platform reduces operating costs for the app by 40 percent and scales effortlessly as its user base continues to grow. Read the case study LogDNA. LogDNA saw a clear need to address data sprawl in the modern, cloud-native development stack. Its innovative software-as-a-service (SaaS) platform built on ...

  2. Home Case Studies Search Featured Case Study: Active International

    Home Case Studies Search Featured Case Study: Active International. Growing client revenue through high-quality, targeted media campaigns. Learn more View more case studies.

  3. Full Case Study on IBM Marketing Strategy & Competitors Analysis

    It changed its name to IBM in 1924. IBM largely deals in Hardware, Software, Consultancy, and Hosting services. IBM has had an extensive journey so far, having managed to stay in the market for almost a century now. In the IBM case study, we shall talk about IBM's marketing strategy, marketing mix, competitors' analysis, BCG matrix ...

  4. Case Study of IBM: Employee Training through E-Learning

    The company reportedly saved about $166 million within one year of implementing the e-learning program for training its employees all over the world. The figure rose to $350 million in 2001. During this year, IBM reported a return on investment (ROI)'s of 2284 percent from its Basic Blue e-Learning program.

  5. IBM Supply Chain

    IBM employs supply chain staff in 40 countries and makes hundreds of thousands of customer deliveries and service calls in over 170 nations. IBM also collaborates with hundreds of suppliers across its multi-tier global network to build highly configurable and customized products to customer specifications. Previously, the IBM supply chain ran ...

  6. IBM Change Management Case Study

    In this blog post, we will take a closer look at IBM's change management case study, examining its background, change management strategy, and results. Brief History and Growth of IBM IBM, also known as International Business Machines Corporation, is an American multinational technology company that was founded in 1911.

  7. PDF Responsible Use of Technology: The IBM Case Study

    The IBM Case Study", marks the second in a series, following a White Paper on Microsoft. We would like to thank IBM for sharing their ethical technology governance structure, practices, tools, activities and research expertise for this effort. It is our hope that this document and the Responsible Use of

  7. Rebooting Work for a Digital Era

    This case study describes HR transformation at IBM and is particularly instructive for companies embarking on their own HR digital transformation efforts. IBM's most important lessons are less about the specific solutions it introduced and more about the way it went about finding its new philosophy and its new operating model.

  8. From reengineering to reinvention: the IBM journey to becoming an On Demand Business

    An IBM case study from March 2005. After a highly visible fall from the heights of information technology leadership in the early 1990s, IBM was healthy and growing again; the new millennium brought record revenue and established IBM as the leader in servers and middleware.

  9. Case Study: IBM Strengthens Focus on Project Management

    IBM has made an ongoing commitment to project management excellence. The journey began in the early-to-mid nineties, when the company transformed its culture and support systems to improve its business posture, took bold steps in how it organised, executed and tracked work, and began to group work into projects that produced services and products.

  10. Strategy Study: How IBM Became a Multinational Giant

    IBM stands for International Business Machines Corporation, a multinational technology corporation with over 100 years of history and multiple inventions that are still prevalent today. Its headquarters are in Armonk, New York, but it operates in over 170 countries, and institutional investors own over 55% of the company.

  11. The Learning System at IBM: A Case Study

    A case study dated December 3, 2020, by Fei Qin, Associate Professor in Management at the University of Bath and Faculty Affiliate of the Good Companies, Good Jobs Initiative at MIT, and Thomas A. Kochan, the George M. Bunker Professor at MIT.

  12. Waking Up IBM: How a Gang of Unlikely Rebels Transformed Big Blue

    By the end of 1994, Lou Gerstner's first full year as CEO, the company had racked up $15 billion in cumulative losses over the previous three years, and its market cap had plummeted from its earlier high.

  13. 3 lessons from IBM on designing responsible, ethical AI

    Key lessons learned from this research, along with a brief overview of IBM's historical journey towards ethical technology. The first: trust your employees to think and act ethically. When Francesca Rossi joined IBM in 2015 with a mandate to work on AI ethics, she convened 40 colleagues to explore the topic.

  14. Case Study: What asset-intensive industries can gain using Enterprise Asset Management

    Not that long ago, asset-intensive organisations took a strictly "pen and paper" approach to maintenance checks and inspections of physical assets: inspectors walked along an automobile assembly line, manually taking notes in an equipment maintenance log.

  15. IBM: Design Thinking

    This case describes the 2012-2020 effort at IBM to implement design thinking throughout the company and hire thousands of designers to serve on every product team alongside technical engineers, developers, and product managers. IBM's design transformation is told through the development of the Design Program Office.

  16. 1,388 IBM Case Studies, Success Stories, & Customer Stories

    A directory of 1,388 IBM customer case studies that can be narrowed down by company size and industry. One example: streamlining lease accounting while delivering cost-effective technology for the workforce, from David Keavney, Director of IT Asset Management.

  17. Case studies

    A collection spanning topics from re-thinking the world's most complex supply chains to why global institutions partner with IBM Quantum to explore quantum computing applications and build skills.

  18. IBM: An intelligent ABM targeting option

    As a new tool from IBM, Watson Analytics visualises data for its users, automatically creating charts and tables and facilitating quick analyses of what it finds to be strong, data-proven information. IBM's former Director of Performance Media was looking to generate more leads at lower cost than business-as-usual tactics.

  19. Case study: IBM fuels collaboration and innovation

    The challenge: creating the ideal space for teams and clients to work together. In Chicago, IBM was looking for a new way to work with one of its biggest clients; rather than always meeting at the IBM office, Romas Pencyla, vice president and partner at IBM, envisioned an inspiring offsite environment.

  20. Case studies: IBM Robotic Process Automation

    Selta Square, a startup, automates a core process for its first-of-its-kind drug safety monitoring using IBM Robotic Process Automation. New Mexico Mutual, an insurer, uses the same product to free employees from repetitive tasks, saving 3.5 hours a day. Aon Italy is also featured.

  21. IBM: Consolidating Processes and Data to Drive Services

    IBM C&CS turned to Planview PSA to provide a single global set of financial forecasting and project management tools and to accelerate the overall quote-to-cash process. With Planview PSA, C&CS was able to merge data from several disconnected backend systems.

  22. Blockchain use cases

    IPwe helps companies make better use of their intellectual property, yet the IP transaction platform saw inefficiencies and a lack of transparency in the ecosystem. With IBM Blockchain and AI, it created a suite of products to increase visibility and flexibility within the patent marketplace.

  23. IBM Study: C-Suite Confidence in Delivering Basic IT Services Wanes

    This global study of 2,500 C-level technology executives (tech CxOs) from 34 countries revealed that less than half (47%) of those surveyed think their IT organisation is effective at basic services, compared with 69% surveyed in 2013. Only 36% of surveyed CEOs and 50% of surveyed CFOs believe IT is effective at basic services, down from 64% and 60%, respectively, in 2013.
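
A quick back-of-the-envelope reading of the ROI figure in the e-learning study (item 3 above): assuming the standard ROI definition and, purely as an illustrative assumption, treating the reported $350 million as the program's one-year benefit (the study does not spell out its method), the implied program cost follows from

\[ \text{ROI} = \frac{\text{benefit} - \text{cost}}{\text{cost}} \times 100\% \quad\Rightarrow\quad \text{cost} = \frac{\text{benefit}}{1 + \text{ROI}/100\%} \approx \frac{\$350\text{M}}{23.84} \approx \$14.7\text{M}. \]

Under these assumptions, every dollar spent on Basic Blue returned roughly $24 in value, which is what an ROI of 2,284 percent means in practice.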
