Channel: Ask OpenStack: Q&A Site for OpenStack Users and Developers - Individual question feed

Answer by rooter for: Hi, I am new to the OpenStack community. I need to implement autoscaling for my Tomcat application. How do I implement HAProxy as the default load balancer? Will it be created automatically once I launch my Tomcat application instance from a Heat template, or should I manually create an HAProxy instance and later reference its details in my Tomcat app Heat template? Any help would be much appreciated. Best regards, Muhammed Roshan

Just start multiple heat-api processes, as many as you want. Then configure keystone so that the endpoint points to your HAProxy host:

```
[root@openstack1 ~]# keystone endpoint-list | grep 8004
| e41899cd971b437182f1be06ed98a129 | DefaultRegion | http://haproxyhost:8004/v1/$(tenant_id)s | http://haproxyhost:8004/v1 | http://haproxyhost:8004/v1/$(tenant_id)s | 7648a4b19fc64cbdb60e23aa42fa369a |
```

Then point HAProxy at those heat-api processes. Here is one example HAProxy configuration, but there are many valid ways of doing it. Tweak the timeouts to your liking:

```
root@haproxyhost: cat /etc/haproxy/haproxy.cfg
global
    daemon

defaults
    mode http
    log 127.0.0.1:514 local4
    maxconn 10000
    timeout connect 4s
    timeout client 180s
    timeout server 180s
    option redispatch
    retries 3
    balance roundrobin

listen heatAPI
    bind 0.0.0.0:8004
    server heatnode1 heatnode1:8004 check inter 3s rise 2 fall 3
    server heatnode2 heatnode2:8004 check inter 3s rise 2 fall 3
```

Or, run multiple heat-api processes on the same node, under different ports (note the server names must be unique within the listen section):

```
listen heatAPI
    bind 0.0.0.0:8004
    server heatnode1a heatnode1:8004 check inter 3s rise 2 fall 3
    server heatnode1b heatnode1:18004 check inter 3s rise 2 fall 3
```

Note: using "mode tcp" instead of "mode http" is also possible, and results in better performance.
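For intuition, the `check inter 3s rise 2 fall 3` options mean each backend is probed every 3 seconds, taken out of rotation after 3 consecutive failed checks ("fall 3"), and put back after 2 consecutive successful checks ("rise 2"), while `balance roundrobin` cycles requests across the healthy servers. A minimal Python sketch of that logic, purely illustrative (HAProxy implements this natively; the class and backend names here are hypothetical):

```python
# Sketch of HAProxy-style round-robin balancing with
# "rise 2, fall 3" health-check semantics.

class Backend:
    def __init__(self, name, rise=2, fall=3):
        self.name = name
        self.rise = rise      # consecutive successes needed to mark UP
        self.fall = fall      # consecutive failures needed to mark DOWN
        self.up = True        # servers start UP, as in HAProxy
        self.successes = 0
        self.failures = 0

    def record_check(self, ok):
        """Record one health-check result and update UP/DOWN state."""
        if ok:
            self.failures = 0
            self.successes += 1
            if not self.up and self.successes >= self.rise:
                self.up = True
        else:
            self.successes = 0
            self.failures += 1
            if self.up and self.failures >= self.fall:
                self.up = False


class RoundRobinPool:
    def __init__(self, backends):
        self.backends = backends
        self.index = 0

    def pick(self):
        """Return the next UP backend in round-robin order, or None."""
        for _ in range(len(self.backends)):
            backend = self.backends[self.index]
            self.index = (self.index + 1) % len(self.backends)
            if backend.up:
                return backend
        return None  # no healthy backends left


pool = RoundRobinPool([Backend("heatnode1"), Backend("heatnode2")])

# Both healthy: requests alternate between the two nodes.
print([pool.pick().name for _ in range(4)])
# → ['heatnode1', 'heatnode2', 'heatnode1', 'heatnode2']

# Three failed checks in a row take heatnode1 out of rotation ("fall 3").
for _ in range(3):
    pool.backends[0].record_check(ok=False)
print([pool.pick().name for _ in range(2)])
# → ['heatnode2', 'heatnode2']

# Two successful checks bring it back ("rise 2").
for _ in range(2):
    pool.backends[0].record_check(ok=True)
print(pool.backends[0].up)
# → True
```

This is why `option redispatch` matters in the config above: a request aimed at a server that just went DOWN is retried against a remaining healthy one instead of failing outright.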
