Answer by zaneb for: Hi, I am new to the OpenStack community. I need to implement autoscaling for my Tomcat application. How do I implement HAProxy as the default load balancer? Will it be created automatically once I launch my Tomcat application instance from a Heat template, or should I manually create an HAProxy instance and then reference its details in my Tomcat app's Heat template? Any help would be much appreciated. Best regards, Muhammed Roshan

If you want Heat to create a load balancer, you'll need to specify one in the template. You have two options. [`AWS::ElasticLoadBalancing::LoadBalancer`](http://docs.openstack.org/developer/heat/template_guide/cfn.html#AWS::ElasticLoadBalancing::LoadBalancer) creates a Nova instance running HAProxy on a (probably outdated) version of Fedora, and you have to supply the image for it yourself. The better option is [`OS::Neutron::LoadBalancer`](http://docs.openstack.org/developer/heat/template_guide/openstack.html#OS::Neutron::LoadBalancer), which configures load balancing through the Neutron API (this does assume the required Neutron LBaaS plugin is available in your cloud). Reference the load balancer in the autoscaling group definition, and autoscaling will update the load balancer configuration as servers are added or removed.
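For concreteness, here is a minimal HOT sketch of the Neutron approach, using the AWS-compatible autoscaling group so the load balancer can be referenced directly via `LoadBalancerNames`. The parameter names (`image`, `flavor`, `subnet_id`), ports, sizes, and resource names are placeholders of mine, and exact property names can differ between releases, so check the resource reference for your version:

```yaml
heat_template_version: 2013-05-23

description: >
  Illustrative sketch only: a Neutron LBaaS load balancer wired into an
  autoscaling group. Parameter names, ports and sizes are placeholders.

parameters:
  image:
    type: string
  flavor:
    type: string
  subnet_id:
    type: string

resources:
  # Health monitor, pool and load balancer, all created via the Neutron API.
  monitor:
    type: OS::Neutron::HealthMonitor
    properties:
      type: TCP
      delay: 5
      max_retries: 3
      timeout: 5

  pool:
    type: OS::Neutron::Pool
    properties:
      protocol: HTTP
      subnet_id: { get_param: subnet_id }
      lb_method: ROUND_ROBIN
      monitors: [ { get_resource: monitor } ]
      vip:
        protocol_port: 80          # port clients connect to on the VIP

  lb:
    type: OS::Neutron::LoadBalancer
    properties:
      pool_id: { get_resource: pool }
      protocol_port: 8080          # port Tomcat listens on in each member

  launch_config:
    type: AWS::AutoScaling::LaunchConfiguration
    properties:
      ImageId: { get_param: image }
      InstanceType: { get_param: flavor }

  asg:
    type: AWS::AutoScaling::AutoScalingGroup
    properties:
      AvailabilityZones: ['nova']                    # assumption: single default AZ
      VPCZoneIdentifier: [ { get_param: subnet_id } ]
      LaunchConfigurationName: { get_resource: launch_config }
      MinSize: '1'
      MaxSize: '3'
      # Referencing the load balancer here is what makes scaling actions
      # add and remove pool members automatically.
      LoadBalancerNames:
        - lb                                         # logical name of the LB resource above
```

With this wiring, a scaling action on the group also updates the pool membership behind the VIP, so new Tomcat instances start receiving traffic without any manual HAProxy work. An alternative pattern you'll see in the heat-templates examples is an `OS::Heat::AutoScalingGroup` whose scaled unit is a nested template containing the server plus an `OS::Neutron::PoolMember`.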
