OpenStack Installation on a Single Physical Server - A Complete Beginner's Guide
Introduction: Why OpenStack?
OpenStack stands as the world’s most widely deployed open-source cloud computing platform, powering some of the largest public and private clouds across the globe. From CERN’s massive research infrastructure to countless enterprise data centers, OpenStack provides the foundation for Infrastructure-as-a-Service (IaaS) deployments that rival proprietary solutions like AWS, Azure, and Google Cloud Platform.
For learners, developers, and small organizations, OpenStack offers something uniquely valuable: the ability to build and understand cloud infrastructure from the ground up, completely free of licensing costs. This guide focuses on deploying OpenStack on a single physical server – the perfect starting point for learning, development, and testing before scaling to multi-node production environments.
Understanding Single-Node OpenStack
A single-node deployment (also called “all-in-one”) runs all OpenStack services on one physical machine. This architecture serves multiple purposes:
- Learning Environment: Understand OpenStack components without complex networking
- Development Platform: Test applications in a real cloud environment locally
- Proof of Concept: Validate OpenStack for your use case before hardware investment
- Small Workloads: Host lightweight applications for personal or small team use
Limitations to Consider
Single-node deployments have inherent constraints:
- No High Availability: Server failure means complete service outage
- Limited Scalability: Compute capacity cannot grow beyond the single machine
- Resource Competition: All services compete for CPU, RAM, and disk
- No True Isolation: Network isolation is simulated rather than physical
Prerequisites and System Requirements
Hardware Requirements
For a functional single-node OpenStack deployment, your physical server should meet these minimum specifications:
| Component | Minimum | Recommended |
|---|---|---|
| CPU | 4 cores | 8+ cores (with virtualization support) |
| RAM | 8 GB | 16-32 GB |
| Disk | 100 GB | 500 GB+ SSD |
| Network | 1 NIC | 2 NICs (management + tenant networks) |
Critical: Ensure your CPU supports virtualization extensions:
- Intel: VT-x (check with `grep vmx /proc/cpuinfo`)
- AMD: AMD-V (check with `grep svm /proc/cpuinfo`)
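The hardware checks above can be combined into a quick pre-flight script; this is a sketch whose thresholds mirror the minimum specs in the table:

```shell
# Pre-flight check against the minimum specs above
cores=$(nproc)
mem_gb=$(awk '/MemTotal/ {printf "%d", $2/1024/1024}' /proc/meminfo)

echo "CPU cores: $cores (minimum: 4)"
echo "RAM: ${mem_gb} GB (minimum: 8)"

if grep -Eq 'vmx|svm' /proc/cpuinfo; then
  echo "Virtualization extensions: present"
else
  echo "Virtualization extensions: MISSING - enable VT-x/AMD-V in the BIOS"
fi
```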
Operating System
This guide uses CentOS Stream 9 or Red Hat Enterprise Linux 9 as the base operating system. Why?
- Best compatibility with OpenStack components
- Comprehensive documentation and community support
- Native SELinux integration for security
- Stable package ecosystem
Network Configuration
Plan your network topology before installation:
- Management Network: `192.168.1.0/24` (access to the OpenStack dashboard and APIs)
- Provider Network: `192.168.100.0/24` (external connectivity for instances)
- Self-service Networks: internal tenant networks managed by OpenStack
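Before assigning these roles, list the interfaces on the box so you know which NIC maps to which network (interface names such as `eth1` used later in this guide are assumptions; yours may look like `enp3s0`):

```shell
# List network interfaces and their MAC addresses
for nic in /sys/class/net/*; do
  [ -f "$nic/address" ] || continue
  printf '%-12s %s\n' "$(basename "$nic")" "$(cat "$nic/address")"
done
```

`ip -br addr show` gives the same overview with current IP addresses included.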
Step 1: Prepare Your CentOS/RHEL Server
1.1 Update the System
Start with a fully updated system:
```shell
sudo dnf update -y
sudo reboot
```
Verify your system after reboot:
```shell
cat /etc/os-release | head -n 5
```
1.2 Configure Hostname and Hosts
Set a descriptive hostname:
```shell
sudo hostnamectl set-hostname openstack-controller.localdomain
```
Edit `/etc/hosts` to include your IP addresses (the address below is the management IP used throughout this guide; substitute your own):

```shell
sudo tee -a /etc/hosts << EOF
192.168.1.100 openstack-controller.localdomain openstack-controller
EOF
```
Verify hostname resolution:
```shell
ping -c 3 openstack-controller
```
1.3 Disable SELinux (for Learning Environment)
For production, configure SELinux properly. For learning, temporarily disable:
```shell
sudo setenforce 0
# Make the change persist across reboots
sudo sed -i 's/^SELINUX=enforcing/SELINUX=permissive/' /etc/selinux/config
```
1.4 Disable Firewalld (Alternative: Configure Rules)
Option A - Disable firewalld (simpler for learning):
```shell
sudo systemctl stop firewalld
sudo systemctl disable firewalld
```
Option B - Configure proper rules (better practice):
```shell
sudo firewall-cmd --add-service=ssh --permanent
sudo firewall-cmd --add-service=http --permanent
sudo firewall-cmd --add-service=https --permanent
sudo firewall-cmd --reload
```
1.5 Configure NetworkManager
Ensure NetworkManager manages all interfaces:
```shell
sudo systemctl enable NetworkManager
sudo systemctl start NetworkManager
```
Step 2: Enable Required Repositories
2.1 Add OpenStack Repository
For CentOS Stream 9, enable the OpenStack repository:
```shell
# Enable CentOS Stream repositories
sudo dnf config-manager --set-enabled crb
# Install the RDO release package (the release name changes each
# OpenStack cycle; check the RDO project for the current one)
sudo dnf install -y centos-release-openstack-antelope
```
Update package index after adding repositories:
```shell
sudo dnf update -y
```
2.2 Enable Virtualization Support
Install and enable virtualization packages:
```shell
sudo dnf install -y @virtualization
sudo systemctl enable --now libvirtd
```
Verify virtualization support:
```shell
sudo virt-host-validate
```
You should see “PASS” for the QEMU/KVM checks (warnings about optional features are usually safe to ignore).
Step 3: Install Packstack (RDO - OpenStack Distribution)
Packstack provides the easiest installation path for beginners. It uses Puppet modules to automate OpenStack deployment.
3.1 Install Packstack and Dependencies
```shell
sudo dnf install -y openstack-packstack python3-openstackclient
```
Verify installation:
```shell
packstack --version
```
3.2 Generate Answer File
Packstack uses an answer file to configure installation parameters:
```shell
# Generate default answer file
sudo packstack --gen-answer-file=/root/packstack-answers.txt
```
This creates a template file at `/root/packstack-answers.txt` with all configuration options.
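Before editing, it can help to skim the options this guide changes; a sketch, assuming the answer file was generated at the path above:

```shell
# Show the settings this guide touches
f=/root/packstack-answers.txt
if [ -f "$f" ]; then
  grep -E '^CONFIG_(KEYSTONE_ADMIN_PW|CONTROLLER_HOST|NEUTRON_OVS_BRIDGE)' "$f"
else
  echo "answer file not found - generate it first with --gen-answer-file"
fi
```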
Step 4: Configure Packstack for Single-Node Deployment
4.1 Edit Answer File
Open the answer file for editing:
```shell
sudo nano /root/packstack-answers.txt
```
Modify these critical parameters:
```ini
# General configuration
CONFIG_CONTROLLER_HOST=192.168.1.100
CONFIG_COMPUTE_HOSTS=192.168.1.100
CONFIG_NETWORK_HOSTS=192.168.1.100
CONFIG_KEYSTONE_ADMIN_PW=YourSecureAdminPassword123!

# External (provider) network mapping
CONFIG_NEUTRON_OVS_BRIDGE_MAPPINGS=extnet:br-ex
CONFIG_NEUTRON_OVS_BRIDGE_IFACES=br-ex:eth1

# Skip the demo project and images
CONFIG_PROVISION_DEMO=n
```
Important: Replace `192.168.1.100` with your server’s actual IP address. Replace `eth1` with your actual external network interface name (find it with `ip link show`).
4.2 Alternative: Use Command-Line Options
Instead of editing the answer file, you can specify options directly:
```shell
sudo packstack \
  --allinone \
  --os-neutron-ovs-bridge-mappings=extnet:br-ex \
  --os-neutron-ovs-bridge-interfaces=br-ex:eth1 \
  --keystone-admin-passwd=YourSecureAdminPassword123!
```
4.3 Start the Installation
Begin the installation process. This takes 30-60 minutes:
```shell
sudo packstack --answer-file=/root/packstack-answers.txt
```
Monitor progress carefully. The installer displays each component’s status.
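Packstack also writes a detailed log for each run; tailing it in a second terminal shows what the progress lines summarize (the timestamped directory name varies per run):

```shell
# Follow the most recent Packstack run log
latest=$(ls -td /var/tmp/packstack/*/ 2>/dev/null | head -n 1)
if [ -n "$latest" ]; then
  sudo tail -f "${latest}openstack-setup.log"
else
  echo "no packstack run directory found under /var/tmp/packstack"
fi
```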
Step 5: Post-Installation Configuration
5.1 Verify Installation Success
Check all OpenStack services:
```shell
sudo systemctl status openstack-nova-api
# Confirm every service registered itself with Keystone
source /root/keystonerc_admin
openstack service list
```
5.2 Access the Horizon Dashboard
Open your browser and navigate to:
```
http://192.168.1.100/dashboard
```
- Username: admin
- Password: YourSecureAdminPassword123!
5.3 Configure External Networks
Log in to Horizon and configure provider networks:
- Navigate to Admin → Networks → Create Network
- Set name: “ext-net”
- Network type: Flat
- External Network: Yes
- Subnet: 192.168.100.0/24
- Gateway: 192.168.100.1
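The same provider network can be created from the CLI; a sketch, assuming the `extnet` physical-network label from the Packstack bridge mapping and admin credentials sourced from `/root/keystonerc_admin`:

```shell
# CLI equivalent of the Horizon steps above (run on the controller)
if command -v openstack >/dev/null 2>&1; then
  openstack network create ext-net \
    --external \
    --provider-network-type flat \
    --provider-physical-network extnet
  openstack subnet create ext-subnet \
    --network ext-net \
    --subnet-range 192.168.100.0/24 \
    --gateway 192.168.100.1 \
    --no-dhcp
else
  echo "openstack CLI not found - run this on the controller"
fi
```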
5.4 Launch Your First Instance
Create an initial virtual machine:
- Upload an image: Project → Compute → Images → Create Image
- Choose CirrOS (lightweight test image) or CentOS
- Create network with subnet
- Launch instance with selected flavor
Verify connectivity:
```shell
source /root/keystonerc_admin
openstack server list
```
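The Horizon steps above map to a short CLI sequence; this is a sketch — the CirrOS version/URL, flavor, and resource names are examples, not fixed values:

```shell
# Launch a test instance from the CLI (run on the controller, after
# "source /root/keystonerc_admin"; names and image version are examples)
if command -v openstack >/dev/null 2>&1; then
  curl -LO http://download.cirros-cloud.net/0.6.2/cirros-0.6.2-x86_64-disk.img
  openstack image create cirros \
    --file cirros-0.6.2-x86_64-disk.img \
    --disk-format qcow2 --container-format bare --public
  openstack flavor create --ram 512 --disk 1 --vcpus 1 m1.tiny
  openstack server create --image cirros --flavor m1.tiny \
    --network ext-net test-vm
else
  echo "openstack CLI not found - run this on the controller"
fi
```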
Step 6: Troubleshooting Common Issues
Issue: Packstack fails with network errors
Solution: Verify firewall and SELinux settings:
```shell
sudo setenforce 0
sudo systemctl stop firewalld
```
Issue: Services won’t start
Solution: Check logs:
```shell
sudo tail -f /var/log/messages
# Or inspect a specific service, for example:
sudo journalctl -u openstack-nova-api --since "10 minutes ago"
```
Issue: Cannot access Horizon
Solution: Verify Apache and memcached:
```shell
sudo systemctl restart httpd
sudo systemctl restart memcached
```
Issue: Instance creation fails
Solution: Check compute service:
```shell
sudo nova-status upgrade check
source /root/keystonerc_admin
openstack compute service list
```
Best Practices for Single-Node OpenStack
Resource Management
Limit resource-intensive services:
- Disable unnecessary meters (Ceilometer)
- Configure Nova scheduler to optimize single-host placement
- Set appropriate overcommit ratios
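Overcommit ratios are set in `nova.conf`; the values below are illustrative, not recommendations — tune them for your workload and restart the Nova services after editing:

```ini
# /etc/nova/nova.conf - illustrative overcommit settings
[DEFAULT]
cpu_allocation_ratio = 4.0
ram_allocation_ratio = 1.0
disk_allocation_ratio = 1.0
```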
Backup Strategy
Backup critical data before upgrades:
```shell
sudo mysqldump -u root keystone > keystone-backup.sql
```
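The single dump above can be extended to the other service databases; a sketch — the database list matches a common Packstack install and root access to MariaDB is assumed (compare against what `SHOW DATABASES` reports):

```shell
# Dump each OpenStack service database with a dated filename
if command -v mysqldump >/dev/null 2>&1 \
   && mysqladmin -u root status >/dev/null 2>&1; then
  for db in keystone glance nova nova_api neutron cinder; do
    sudo mysqldump -u root "$db" > "${db}-backup-$(date +%F).sql"
  done
else
  echo "MariaDB not reachable - run this on the controller"
fi
```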
Monitoring
Monitor resource utilization:
```shell
# Check allocated vs actual resources
source /root/keystonerc_admin
openstack hypervisor list
free -h
df -h
```
Conclusion
You now have a fully functional OpenStack cloud running on a single physical server. While this deployment won’t match the scale of multi-node production environments, it provides an invaluable learning platform for understanding cloud infrastructure concepts, testing applications, and validating OpenStack’s capabilities before larger investments.
As your needs grow, this foundation can expand into a multi-node architecture. The skills you’ve developed—from network configuration to service management—transfer directly to larger-scale deployments.
Experiment freely, build virtual machines, explore the dashboard, and deepen your understanding of cloud computing fundamentals. This single-node OpenStack instance is your sandbox for cloud innovation.
What will you build first? A web server cluster, a container platform, or perhaps a testing environment for your applications? The possibilities within your private cloud await exploration.
Have questions about expanding beyond single-node or optimizing your current deployment? Feel free to share your experiences and challenges in the comments.