VuNet as a non-sudo user

Installation as a non-sudo user:

Can we please make this the default installation mode for our product? All customers have the same ask, i.e., they want our product to be installed as an application user and not as a sudo user.


Below are the steps (the same are also attached).


Download the sample files for reference:
https://www.vunetsystems.com/_Downloads_/_icici_/icici-sudo-fix.tar.gz

1) Add the following entries to the sudoers file.

Run sudo visudo and add the following command aliases:
Cmnd_Alias HEARTBEAT = /bin/systemctl * heartbeat, /usr/sbin/service heartbeat *
Cmnd_Alias LOGSTASH = /bin/systemctl * logstash, /usr/sbin/service logstash *
Cmnd_Alias APACHE = /bin/systemctl * apache2, /usr/sbin/service apache2 *
Cmnd_Alias VIENNA = /bin/systemctl * kibana, /usr/sbin/service kibana *
Cmnd_Alias SSH_SERVICE = /bin/systemctl * ssh, /usr/sbin/service ssh *
Cmnd_Alias ES_SERVICE = /bin/systemctl * elasticsearch, /usr/sbin/service elasticsearch *
Cmnd_Alias REDIS_SERVICE = /bin/systemctl * redis_6379, /usr/sbin/service redis_6379 *
Cmnd_Alias CHOWN_VIENNA = /bin/chown kibana\:vunet -R /opt/kibana/src/ui/vienna_images/
Cmnd_Alias MKDIR_VIENNA = /bin/mkdir -p /opt/kibana/src/ui/vienna_images/1/1/

# Remove the existing line: vunet ALL=(ALL) NOPASSWD:ALL
# Add the line below in its place:

vunet ALL=(ALL) NOPASSWD:HEARTBEAT,LOGSTASH,APACHE,VIENNA,SSH_SERVICE,ES_SERVICE,REDIS_SERVICE,CHOWN_VIENNA,MKDIR_VIENNA

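To confirm the new rules are in effect, you can list the sudo privileges now granted to vunet (a quick verification, not part of the original steps):

# Show the sudo rules that apply to the vunet user
sudo -l -U vunet

# Spot-check one of the whitelisted commands as vunet
sudo systemctl restart logstash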

2) Change the ownership of the following directories as shown:

sudo chown vunet:vunet -R /etc/heartbeat
sudo chown vunet:vunet -R /usr/share/heartbeat/
sudo chown vunet:vunet -R /etc/logstash
sudo chown vunet:vunet -R /usr/share/logstash
sudo chown kibana:vunet -R /opt/kibana/src/ui/vienna_images/
sudo chown vunet:vunet -R /etc/logstash/tables/
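An optional sanity check that the ownership changes took effect:

# Each directory below should now list the owner set above
ls -ld /etc/heartbeat /usr/share/heartbeat /etc/logstash /usr/share/logstash
ls -ld /opt/kibana/src/ui/vienna_images /etc/logstash/tables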

3) Modify the init files so the services run as the vunet user:

Modify the Logstash init file to run as the vunet user (see the attached logstash-init-file).
Modify the Heartbeat init file to run as the vunet user (see the attached heartbeat-init-file).
A sketch of the typical change follows.
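Since the attached files are not reproduced here, the following is only an illustrative sketch: Debian-style Logstash packages expose LS_USER/LS_GROUP in the init script or /etc/default/logstash, and the Heartbeat init file carries a comparable user setting (the Heartbeat variable name below is hypothetical; follow the attached files):

# /etc/default/logstash (or the init script itself) -- run Logstash as vunet
LS_USER=vunet
LS_GROUP=vunet

# Heartbeat init file -- equivalent setting; variable name is hypothetical
BEAT_USER=vunet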

4) Make the following modifications in the /etc/group file:

www-data:x:33:vunet,logstash,kibana
vunet:x:1000:www-data,logstash,kibana,elasticsearch
logstash:x:999:vunet,www-data
kibana:x:9999:www-data,vunet
elasticsearch:x:118:vunet
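The same memberships can be granted with usermod instead of editing /etc/group by hand, which avoids accidental syntax errors (an equivalent alternative, not part of the original steps):

# Append each user to the supplementary groups listed above (-aG keeps existing groups)
sudo usermod -aG www-data,logstash,kibana,elasticsearch vunet
sudo usermod -aG vunet,www-data logstash
sudo usermod -aG vunet,www-data kibana
sudo usermod -aG vunet,logstash,kibana www-data
sudo usermod -aG vunet elasticsearch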

5) Cairo changes for data_source_config.py

/home/vunet/workspace/cairo/map/daq/data_source_config

# vim data_source_config.py

# Remove 'sudo' from the rsync command:

Change

cmd_str += 'sudo rsync --include="0*.conf" --include="*.yml"'

to

cmd_str += 'rsync --include="0*.conf" --include="*.yml"'

and change

'--rsync-path="sudo rsync" --chmod=F665 ' + \

to

'--rsync-path="rsync" --chmod=F665 ' + \
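If you prefer to script this edit, a sed sketch (assuming the file contains exactly the two patterns above):

# Drop the leading sudo from the local rsync invocation
sed -i 's/sudo rsync --include/rsync --include/' data_source_config.py
# Run the remote rsync without sudo as well
sed -i 's/--rsync-path="sudo rsync"/--rsync-path="rsync"/' data_source_config.py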

 

6) Cairo changes for data_config_mapping.yml (refer to the attached data_config_mapping.yml).

Change the "target-action:" values as shown below.

workspace/cairo/configs/data_config_mapping.yml

logstash-shipper:
  default:
    source-type: "File"
    # Before taking an action if any action is to be taken
    pre-action-function: "map.daq.data_source_config.data_source_config.pre_process"
    dosync: True
    # Folder within base location of master copy corresponding to the
    # file type
    source-folder: "logstash-shipper/"
    description: "Logstash configuration in Shipper"
    # List of machines to send data to
    target: ["shipper-1"]
    # Location within target machines
    target-location: "/etc/logstash/conf.d/"
    # Operation for which not to take any action
    skip-post-action: ["DELETE"]
    # Action scripts to be executed in target machine after data copy
    target-action: ['cat /var/run/logstash.pid | xargs -I {} kill -9 {} ; sudo service logstash restart']
    # Function to execute after action has been finished
    post-action-function: "map.daq.data_source_config.data_source_config.post_process"
    # Any script to be executed after the action
    post-action-script: []

heartbeat:
  default:
    source-type: "File"
    # Before taking an action if any action is to be taken
    pre-action-function: "map.daq.data_source_config.data_source_config.pre_process"
    dosync: True
    # Folder within base location of master copy corresponding to the
    # file type
    source-folder: "heartbeat/"
    description: "Heartbeat configuration"
    # List of machines to send data to
    target: ["shipper-1"]
    # Location within target machines
    target-location: "/etc/heartbeat/"
    # Action scripts to be executed in target machine after data copy
    #target-action: ['sudo service heartbeat restart']
    #target-action: ['cat /var/run/heartbeat.pid | xargs -I {} kill -9 {} ; sudo service heartbeat restart']
    #target-action: ["ps -aef | grep '/bin/heartbeat -c' | grep -v grep | awk '{print $2}' | xargs -I {} kill -9 {} ; sudo service heartbeat restart"]
    target-action: ["/bin/ps -aef | grep /usr/share/heartbeat/bin/heartbeat | grep -v grep | awk '{print $2}' | xargs -I {} kill -9 {} ; sudo service heartbeat restart"]
    # Function to execute after action has been finished
    post-action-function: "map.daq.data_source_config.data_source_config.post_process"
    # Any script to be executed after the action
    post-action-script: []

images:
  default:
    source-type: "File"
    # Before taking an action if any action is to be taken
    pre-action-function: ""
    dosync: True
    make-tenant-bu-aware-sync: True
    # Indicates that files which are deleted in the source
    # are to be deleted from the destination too
    sync-deleted: True
    # This uses a different base location than the globally defined one
    base-location: "/opt/"
    # Folder within base location of master copy corresponding to the
    # file type
    source-folder: "images/"
    description: "Image management configuration"
    # List of machines to send data to
    target: ["shipper-1"]
    # Location within target machines
    target-location: "/opt/kibana/src/ui/vienna_images/"
    # Action scripts to be executed in target machine after data copy
    target-action: ['sudo chown -R kibana:vunet /opt/kibana/src/ui/vienna_images/']
    #target-action: ['chown -R kibana:kibana /opt/kibana/src/ui/vienna_images/']
    # Function to execute after action has been finished
    post-action-function: ""
    # Any script to be executed after the action
    post-action-script: []

 

configuration-file-types:
  data-enrichment:
    default:
      source-type: "File"
      # Before taking an action if any action is to be taken
      pre-action-function: "map.daq.data_source_config.data_source_config.pre_process"
      dosync: True
      # Folder within base location of master copy corresponding to the
      # file type
      source-folder: "data-enrichment/"
      description: "Logstash Data Enrichment Files"
      # List of machines to send data to
      target: ["analyser-1"]
      # Location within target machines
      target-location: "/etc/logstash/tables/"
      # Action scripts to be executed in target machine after data copy
      target-action: []
      # Function to execute after action has been finished
      post-action-function: "map.daq.data_source_config.data_source_config.post_process"
      # Any script to be executed after the action
      post-action-script: []
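After the edits, it is worth confirming the file still parses, since the target-action entries mix single and double quotes (a quick check, assuming Python with PyYAML is available on the box):

# Parse the mapping file; prints OK or raises a YAML syntax error
python -c 'import yaml,sys; yaml.safe_load(open(sys.argv[1])); print("OK")' \
    workspace/cairo/configs/data_config_mapping.yml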
