Redis can act as a cache server; combined with the Rails cache it can noticeably speed up your site.
Install with Homebrew:
brew install redis
After installation, the console prints how to enable start-at-login and how to start redis; copied here:
# Enable start at login
ln -sfv /usr/local/opt/redis/*.plist ~/Library/LaunchAgents
launchctl load ~/Library/LaunchAgents/homebrew.mxcl.redis.plist
# How to start the service manually
redis-server /usr/local/etc/redis.conf
If Homebrew is not available (e.g. on Debian), download the source and compile it yourself:
# Download the source
wget http://download.redis.io/redis-stable.tar.gz
# Extract the tarball
tar xvzf redis-stable.tar.gz
# Enter the extracted directory, then compile and install
cd redis-stable
sudo make install
# Configure redis
sudo ./utils/install_server.sh
# You can accept every prompt's default by pressing Enter; the final configuration is printed when it finishes:
Port : 6379
Config file : /etc/redis/6379.conf
Log file : /var/log/redis_6379.log
Data dir : /var/lib/redis/6379
Executable : /usr/local/bin/redis-server
Cli Executable : /usr/local/bin/redis-cli
Once installation and configuration are done, the following commands start, stop, and restart the redis service:
sudo service redis_6379 start|stop|restart
# Connect to the redis service
redis-cli
# Once connected, the following commands show redis runtime data
# Show all runtime data
info
# Show only the stats section
info stats
When Redis serves as a cache, two metrics deserve attention:
# Connect to redis and run `info stats` to inspect them:
keyspace_hits:28    # number of cache hits
keyspace_misses:10  # number of cache misses
# These two counters give the cache hit rate directly: 28 * 100 / (28 + 10) = 73.7%
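The arithmetic can be scripted; a quick Ruby sketch using the counter values from the example above:

```ruby
# Hit rate from the two INFO stats counters shown above
hits   = 28
misses = 10
hit_rate = hits * 100.0 / (hits + misses)
puts format('%.1f%%', hit_rate) # → 73.7%
```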
Source on GitHub:
https://github.com/junegunn/redis-stat
redis-stat is a simple Redis monitoring tool written in Ruby; see its README for usage details.
Before putting Redis to serious use, the following configuration is recommended:
# Open the redis config file: macOS (/usr/local/etc/redis.conf), Debian (/etc/redis/6379.conf)
# Cap the maximum memory usage
maxmemory 1536mb
# Choose the eviction policy applied once the memory limit is reached
maxmemory-policy allkeys-lru
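allkeys-lru means that when maxmemory is reached, Redis evicts the least recently used key, regardless of TTLs. A toy Ruby sketch of the idea (TinyLRU here is a hypothetical, exact implementation; Redis actually approximates LRU by sampling):

```ruby
# Exact LRU cache capped at `max` entries; Redis's allkeys-lru behaves
# similarly but evicts based on memory use and approximates LRU by sampling.
class TinyLRU
  def initialize(max)
    @max = max
    @h = {} # Ruby hashes preserve insertion order
  end

  def set(key, value)
    @h.delete(key)
    @h[key] = value
    @h.delete(@h.first[0]) if @h.size > @max # evict the LRU entry
  end

  def get(key)
    return nil unless @h.key?(key)
    @h[key] = @h.delete(key) # touching a key makes it most recently used
  end
end

cache = TinyLRU.new(2)
cache.set(:a, 1)
cache.set(:b, 2)
cache.get(:a)    # :a is now the most recently used
cache.set(:c, 3) # cap exceeded -> :b (least recently used) is evicted
```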
Then point the Rails cache store at Redis (this URL format comes from the redis-store gem; set it e.g. in config/environments/production.rb):
config.cache_store = :redis_store, "redis://localhost:6379/0/cache"
After restarting the service, Rails stores its cache in Redis - it really is that simple.
PS: in development you also need to enable perform_caching in development.rb, otherwise caching will not take effect:
config.action_controller.perform_caching = true
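With caching enabled, the usual entry point is Rails.cache.fetch: return the cached value on a hit, run the block and store the result on a miss. A dependency-free sketch of that read-through pattern (TinyCache is a stand-in for the Redis-backed store, not the real API):

```ruby
# Minimal read-through cache, mimicking Rails.cache.fetch semantics
class TinyCache
  def initialize
    @store = {}
  end

  # On a miss, run the block and store its result; on a hit, skip the block
  def fetch(key)
    return @store[key] if @store.key?(key)
    @store[key] = yield
  end
end

cache = TinyCache.new
calls = 0
2.times { cache.fetch('answer') { calls += 1; 21 * 2 } }
puts calls # → 1 (the expensive block ran only once)
```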
TODO
This post covers it well, so for now here is the link:
https://ruby-china.org/topics/21488
http://redis.io/topics/quickstart
http://stackoverflow.com/questions/5636299/rails3-caching-in-development-mode-with-rails-cache-fetch
https://ruby-china.org/topics/22761
If production logs are not split and managed, the log files keep growing over time, which makes them unwieldy.
Server environment: Debian
Log splitting is easy to achieve with a small logrotate config. Open /etc/logrotate.conf and append the following:
/path/to/your/rails/log/path/*.log {
  daily
  missingok
  rotate 7
  compress
  delaycompress
  notifempty
  copytruncate
  dateext
}
A logrotate option not used above: postrotate runs a script after each rotation, e.g.:
/log/path/*.log {
  postrotate
    /path/to/script.sh
  endscript
}
Run the following command to test it:
sudo /usr/sbin/logrotate -f /etc/logrotate.conf
If the log directory held only production.log, a single run of the command above produces the rotated file.
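The copytruncate directive matters for Rails: the server keeps production.log open, so logrotate copies the contents aside and truncates the original in place rather than moving it, and the open file handle stays valid. The effect can be simulated in Ruby (demo.log and demo.log.1 are throwaway names for this sketch):

```ruby
require 'fileutils'

# The "app" holds an open, append-mode handle to its log
log = File.open('demo.log', 'a')
log.sync = true
log.puts 'before rotate'

# Simulate copytruncate: copy the contents away, then truncate in place
FileUtils.cp('demo.log', 'demo.log.1')
File.truncate('demo.log', 0)

# The old handle still works; append mode writes at the (new) end of file
log.puts 'after rotate'
log.close

puts File.read('demo.log')   # → after rotate
puts File.read('demo.log.1') # → before rotate
```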
https://gorails.com/guides/rotating-rails-production-logs-with-logrotate
http://www.thegeekstuff.com/2010/07/logrotate-examples/
God is a Ruby process-monitoring framework that makes it easy to keep watch over your Rails service processes.
gem install god
Create a config file at RAILS_ROOT/config/unicorn.god with the following content:
# Derive the Rails root so the file is easy to maintain
RAILS_ROOT = File.dirname(File.dirname(__FILE__))

# Configure how God sends email
God::Contacts::Email.defaults do |d|
  d.from_email = 'god@example.com'
  d.from_name = 'God'
  d.delivery_method = :sendmail
end

# Register the contact that receives alerts
God.contact(:email) do |c|
  c.name = 'Dev Team'
  c.group = 'devteam'
  c.to_email = 'devteam@example.com'
end

# The watch block
God.watch do |w|
  # Set the unicorn pid file path for your own environment
  pid_file = File.join(RAILS_ROOT, "tmp/unicorn.pid")
  w.name = "unicorn"
  w.dir = RAILS_ROOT
  w.interval = 60.seconds
  # Replace start, stop, and restart with your own commands
  w.start = "RAILS_ENV=production unicorn -c #{RAILS_ROOT}/config/unicorn.rb -D"
  w.stop = "kill -s QUIT $(cat #{pid_file})"
  w.restart = "kill -s HUP $(cat #{pid_file})"
  w.start_grace = 20.seconds
  w.restart_grace = 20.seconds
  w.pid_file = pid_file
  # Clean up the pid file, otherwise restarts will fail
  w.behavior(:clean_pid_file)

  # When to start?
  w.start_if do |start|
    start.condition(:process_running) do |c|
      # Check every ten seconds whether the daemon is running,
      # and start it if it isn't
      c.interval = 10.seconds
      c.running = false
    end
  end

  # When to restart a running daemon?
  w.restart_if do |restart|
    restart.condition(:memory_usage) do |c|
      # Sample memory usage at five different times;
      # if three of the five samples are above the limit (100 MB),
      # restart the daemon
      c.above = 100.megabytes
      c.times = [3, 5]
    end
    restart.condition(:cpu_usage) do |c|
      # Restart the daemon if cpu usage goes
      # above 90% at least five times
      c.above = 90.percent
      c.times = 5
    end
  end

  w.lifecycle do |on|
    # Handle edge cases where the daemon
    # can't start for some reason
    on.condition(:flapping) do |c|
      c.to_state = [:start, :restart] # If God tries to start or restart
      c.times = 5                     # five times
      c.within = 5.minutes            # within five minutes
      c.transition = :unmonitored     # we want to stop monitoring
      c.retry_in = 10.minutes         # for 10 minutes and monitor again
      c.retry_times = 5               # we'll loop over this five times
      c.retry_within = 2.hours        # and give up if flapping occurred five times in two hours
    end
  end

  # Send an email every time the process dies
  w.transition(:up, :start) do |on|
    on.condition(:process_exits) do |c|
      c.notify = 'devteam'
    end
  end
end
# Check god's monitoring status
god status
# Start god
god -c config/unicorn.god
# With -D, god runs in the foreground instead of daemonizing
god -c config/unicorn.god -D
# View the monitoring log
god log unicorn
# Terminate all watches (and god itself)
god terminate
# Stop a single watch
god stop unicorn
# Quit god without touching the application processes
god quit
1. God fails to start; running it with -D shows the following error:
ERROR: Socket drbunix:///tmp/god.17165.sock already in use by another instance of god
Kill all god instances with `god terminate`, then start god again.
2. Everything works locally on the Mac, but on the Debian server this error appears:
ERROR: Condition 'God::Conditions::ProcessExits' requires an event system but none has been loaded
Loading the event system needs sudo privileges; starting it with `rvmsudo god` worked for me.
3. Everything is configured, yet no alert emails go out.
Check the mail service on your server, e.g. whether the default mail port 25 is open.
# Test from the command line whether mail can be sent
mail -s "hello" test@example.com
# After pressing Enter, type the body, finish with a line containing only ".", then press Enter twice
Hi,
this is a test
.
Cc:
http://www.synbioz.com/blog/monitoring_server_processes_with_god
https://github.com/mojombo/god/issues/99
Environment: Debian 7
# Add a deployment group
groupadd deployers
# Create a new user in that group
adduser deployer1 -ingroup deployers
# The command above prompts for a password and user details; the password is required, the rest can be left blank
Open the sudoers file (/etc/sudoers) with vi (or your favorite editor) to grant the group sudo rights. Find the line:
%sudo ALL=(ALL:ALL) ALL
# and add the following line after it:
%deployers ALL=(ALL:ALL) ALL
# Change the default ssh port (valid range: 1024-65535; "123456" below is just a placeholder - a real port must not exceed 65535)
Port 123456
# Forbid root from logging in over ssh
PermitRootLogin no
# Or, to be safer still, allow only the specified users to log in
AllowUsers deployer1
PS: keep one logged-in session to the VPS open while testing, in case a bad setting locks out both root and the new user and you are forced to reset the server.
# Restart the ssh service
sudo service ssh restart
# Test that the changes took effect
# You should be able to log in via port 123456; without specifying the port, login fails
ssh -p 123456 deployer1@xxx.xxx.xxx.xxx
# Verify that root can no longer log in
ssh -p 123456 root@xxx.xxx.xxx.xxx
# Check whether you already have a key pair locally
ls ~/.ssh
# If none is found, generate one
ssh-keygen -C "your.email@example.com"
# The command below creates an authorized_keys file under ~/.ssh/ in deployer1's home directory on the server and stores your public key in it
# Some tutorials suggest creating the file on the server and pasting the local public key in by hand; that didn't work for me (probably a copy/paste slip), but this command does
ssh-copy-id -i ~/.ssh/id_rsa.pub -p 123456 deployer1@xxx.xxx.xxx.xxx
This tool makes firewall management easy; under the hood it drives iptables.
# Install ufw if it isn't installed yet
sudo apt-get install ufw
Once enabled, ufw blocks all incoming connections by default, so you need to open the ports you use:
# Allow access to port 123456 (both tcp and udp)
sudo ufw allow 123456
# If the server hosts a website, open ports 80 and 443 as well
sudo ufw allow 80
sudo ufw allow 443
For more usage, see: http://wiki.ubuntu.org.cn/Ufw%E4%BD%BF%E7%94%A8%E6%8C%87%E5%8D%97
or: sudo ufw -h
A quick rundown:
# Run fdisk -l first; output like the following means you have a data disk
Disk /dev/xvdb xxx GB, xxxxx bytes
fdisk -S 56 /dev/xvdb
# The command above walks you through partitioning step by step
# Afterwards fdisk -l shows the freshly partitioned disk
# Format the new partition as ext3
mkfs.ext3 /dev/xvdb1
# Run the following, where /mnt is the mount point; you can substitute your own, e.g. /home/deployer1/data
echo '/dev/xvdb1 /mnt ext3 defaults 0 0' >> /etc/fstab
# Check that the entry was written
cat /etc/fstab
mount -a
# df -h now shows the newly added disk
PS: if you set your own mount point in the fstab entry, make sure that directory exists; create it with mkdir if necessary.
# Install mysql
sudo apt-get install mysql-server
# PS: and the command to remove mysql completely ^_^
sudo apt-get autoremove --purge mysql-server mysql-server-5.0 mysql-common
grant all privileges on *.* to 'username'@'%' identified by 'password';
# The statement above adds a mysql user; you can then log in to mysql with username and password
Allowing remote access should be considered carefully - it is a security risk.
# Open the mysql port, 3306 by default
sudo ufw allow 3306
# In the mysql config file (/etc/mysql/my.cnf), comment out the following line
bind-address = 127.0.0.1
sudo service mysql stop
mv /var/lib/mysql /home/data/
In /etc/mysql/my.cnf, find: datadir = /var/lib/mysql
and change it to: datadir = /home/data/mysql
sudo service mysql start
# Add unicorn to your Gemfile
gem 'unicorn'
# Set the current app's path for later reference. Rails.root isn't available at
# this point, so we have to point up a directory.
app_path = File.expand_path(File.dirname(__FILE__) + '/..')
# The number of worker processes you have here should equal the number of CPU
# cores your server has.
worker_processes (ENV['RAILS_ENV'] == 'production' ? 4 : 1)
# You can listen on a port or a socket. Listening on a socket is good in a
# production environment, but listening on a port can be useful for local
# debugging purposes.
listen app_path + '/tmp/unicorn.sock', backlog: 64
# Time-out
timeout 300
# Set the working directory of this unicorn instance.
working_directory app_path
# Set the location of the unicorn pid file. This should match what we put in the
# unicorn init script later.
pid app_path + '/tmp/unicorn.pid'
# You should define your stderr and stdout here. If you don't, stderr defaults
# to /dev/null and you'll lose any error logging when in daemon mode.
stderr_path app_path + '/log/unicorn.log'
stdout_path app_path + '/log/unicorn.log'
# Load the app up before forking.
preload_app true
# Garbage collection settings.
GC.respond_to?(:copy_on_write_friendly=) &&
  GC.copy_on_write_friendly = true

# If using ActiveRecord, disconnect (from the database) before forking.
before_fork do |server, worker|
  defined?(ActiveRecord::Base) &&
    ActiveRecord::Base.connection.disconnect!
end

# After forking, restore your ActiveRecord connection.
after_fork do |server, worker|
  defined?(ActiveRecord::Base) &&
    ActiveRecord::Base.establish_connection
end
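The preload/fork pattern above can be sketched in plain Ruby (no Unicorn or ActiveRecord here; the names are hypothetical and a string stands in for the database connection): state built before fork is shared with workers via copy-on-write, while per-worker resources such as DB connections must be re-created after the fork, which is what after_fork does.

```ruby
# Toy sketch of preload_app + after_fork.
# The app is loaded once in the master; each worker rebuilds its own
# "connection" after fork, just as after_fork re-establishes ActiveRecord.
app = { name: 'demo', loaded_by: Process.pid } # preloaded in the master

reader, writer = IO.pipe
worker_pid = fork do
  reader.close
  connection = "conn-#{Process.pid}" # re-created per worker (after_fork)
  writer.puts "#{app[:name]} #{connection}"
  writer.close
end
writer.close
output = reader.read.strip
Process.wait(worker_pid)
puts output
```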
At this point you can test starting Rails under unicorn with:
# Run from your Rails root to start the app
unicorn -c config/unicorn.rb
#!/bin/sh
# File: /etc/init.d/unicorn
### BEGIN INIT INFO
# Provides: unicorn
# Required-Start: $local_fs $remote_fs $network $syslog
# Required-Stop: $local_fs $remote_fs $network $syslog
# Default-Start: 2 3 4 5
# Default-Stop: 0 1 6
# Short-Description: starts the unicorn web server
# Description: starts unicorn
### END INIT INFO
# Feel free to change any of the following variables for your app.
# USER is the account the app runs as (ubuntu is the default user on Amazon's EC2 Ubuntu instances).
USER=deployer1
# Replace [PATH_TO_RAILS_ROOT_FOLDER] with your application's path. I prefer
# /srv/app-name to /var/www. The /srv folder is specified as the server's
# "service data" folder, where services are located. The /var directory,
# however, is dedicated to variable data that changes rapidly, such as logs.
# Reference https://help.ubuntu.com/community/LinuxFilesystemTreeOverview for
# more information.
APP_ROOT="/path/to/your/rails_root_path"
# Set the environment. This can be changed to staging or development for staging servers.
RAILS_ENV=production
# This should match the pid setting in $APP_ROOT/config/unicorn.rb.
PID=$APP_ROOT/tmp/unicorn.pid
# A simple description for service output.
DESC="Unicorn app - $RAILS_ENV"
# Unicorn can be run using `bundle exec unicorn` or `bin/unicorn`.
UNICORN="unicorn"
# Execute the unicorn executable as a daemon, with the appropriate configuration
# and in the appropriate environment.
UNICORN_OPTS="-c $APP_ROOT/config/unicorn.rb -E $RAILS_ENV -D"
CMD="RAILS_ENV=$RAILS_ENV $UNICORN $UNICORN_OPTS"
# Give your upgrade action a timeout of 60 seconds.
TIMEOUT=60
# end of custom options
# Store the action that we should take from the service command's first
# argument (e.g. start, stop, upgrade).
action="$1"
# Make sure the script exits if any variables are unset. This is short for
# set -o nounset.
set -u
# Set the location of the old pid. The old pid is the process that is getting replaced.
old_pid="$PID.oldbin"
# Make sure the APP_ROOT is actually a folder that exists. An error message from
# the cd command will be displayed if it fails.
cd $APP_ROOT || exit 1
# A function to send a signal to the current unicorn master process.
sig () {
  test -s "$PID" && kill -$1 `cat $PID`
}

# Send a signal to the old process.
oldsig () {
  test -s $old_pid && kill -$1 `cat $old_pid`
}
# A switch for handling the possible actions to take on the unicorn process.
case $action in
  # Start the process by testing if it's there (sig 0), failing if it is,
  # otherwise running the command as specified above.
  start)
    sig 0 && echo >&2 "$DESC is already running" && exit 0
    su - $USER -c "$CMD"
    ;;
  # Graceful shutdown. Send QUIT signal to the process. Requests will be
  # completed before the processes are terminated.
  stop)
    sig QUIT && echo "Stopping $DESC" && exit 0
    echo >&2 "Not running"
    ;;
  # Quick shutdown - kills all workers immediately.
  force-stop)
    sig TERM && echo "Force-stopping $DESC" && exit 0
    echo >&2 "Not running"
    ;;
  # Graceful shutdown and then start.
  restart)
    sig QUIT && echo "Restarting $DESC" && sleep 2 \
      && su - $USER -c "$CMD" && exit 0
    echo >&2 "Couldn't restart."
    ;;
  # Reloads config file (unicorn.rb) and gracefully restarts all workers. This
  # command won't pick up application code changes if you have `preload_app
  # true` in your unicorn.rb config file.
  reload)
    sig HUP && echo "Reloading configuration for $DESC" && exit 0
    echo >&2 "Couldn't reload configuration."
    ;;
  # Re-execute the running binary, then gracefully shutdown old process. This
  # command allows you to have zero-downtime deployments. The application may
  # spin for a minute, but at least the user doesn't get a 500 error page or
  # the like. Unicorn interprets the USR2 signal as a request to start a new
  # master process and phase out the old worker processes. If the upgrade fails
  # for some reason, a new process is started.
  upgrade)
    if sig USR2 && echo "Upgrading $DESC" && sleep 10 \
      && sig 0 && oldsig QUIT
    then
      n=$TIMEOUT
      while test -s $old_pid && test $n -ge 0
      do
        printf '.' && sleep 1 && n=$(( $n - 1 ))
      done
      echo
      if test $n -lt 0 && test -s $old_pid
      then
        echo >&2 "$old_pid still exists after $TIMEOUT seconds"
        exit 1
      fi
      exit 0
    fi
    echo >&2 "Couldn't upgrade, starting 'su - $USER -c \"$CMD\"' instead"
    su - $USER -c "$CMD"
    ;;
  # A basic status checker. Just checks if the master process is responding to
  # the `kill` command.
  status)
    sig 0 && echo >&2 "$DESC is running." && exit 0
    echo >&2 "$DESC is not running."
    ;;
  # Reopen all logs owned by the master and all workers.
  reopen-logs)
    sig USR1
    ;;
  # Any other action gets the usage message.
  *)
    # Usage
    echo >&2 "Usage: $0 <start|stop|restart|reload|upgrade|force-stop|status|reopen-logs>"
    exit 1
    ;;
esac
# Install nginx
sudo apt-get install nginx
upstream app {
  # This path must match the sock location configured in unicorn,
  # otherwise nginx and unicorn cannot communicate
  server unix:/path/to/your/rails_app_path/tmp/unicorn.sock fail_timeout=0;
}

server {
  listen 80; ## listen for ipv4; this line is default and implied
  server_name localhost;
  root /path/to/your/rails_app_path/public;
  try_files $uri/index.html $uri @app;

  # Serve static assets directly
  location ^~ /assets/ {
    gzip_static on;
    expires max;
    add_header Cache-Control public;
  }

  location @app {
    proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
    proxy_set_header Host $http_host;
    proxy_redirect off;
    proxy_pass http://app;
  }

  error_page 500 502 503 504 /500.html;
  client_max_body_size 4G;
  keepalive_timeout 10;
}
# Enable the site by symlinking it into sites-enabled (run from /etc/nginx/sites-enabled)
sudo ln -s ../sites-available/sitename sitename
http://wiki.ubuntu.org.cn/Ufw%E4%BD%BF%E7%94%A8%E6%8C%87%E5%8D%97
http://help.aliyun.com/knowledge_detail.htm?knowledgeId=5974154
http://www.gotealeaf.com/blog/setting-up-your-production-server-with-nginx-and-unicorn
http://vladigleba.com/blog/2014/03/27/deploying-rails-apps-part-4-configuring-nginx/
Create a dedicated configuration table to hold this user-configurable data; at runtime, just read it from the database.
Pros: easy to extend - adding a configuration item only requires adding a database row.
Cons: reads are slower, especially when the database and the app live on different servers, where network latency is added on top.
a. Create a config file under config/: app_config.yml
# Settings needed in all three environments
defaults: &DEFAULTS
  # sample data
  project_type:
    pub: "Public project"
    pri: "Private project"
# Development-only settings
development:
  <<: *DEFAULTS
# Test-only settings
test:
  <<: *DEFAULTS
# Production-only settings
production:
  <<: *DEFAULTS
b. Load the config file when the app boots
Create config/initializers/load_config.rb with, for example:
APP_CONFIG = YAML.load_file("#{Rails.root}/config/app_config.yml")[Rails.env]
c. Use the config data in application code
APP_CONFIG['project_type']['pub'] # => "Public project"
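The merge-key (`<<: *DEFAULTS`) mechanics can be checked in plain Ruby. One caveat: on Ruby 3.1+ (Psych 4) `YAML.load` rejects aliases by default, so `aliases: true` is needed; the rescue keeps the sketch working on older Psych versions that don't know that keyword:

```ruby
require 'yaml'

raw = <<~YML
  defaults: &DEFAULTS
    project_type:
      pub: "Public project"
      pri: "Private project"
  development:
    <<: *DEFAULTS
  production:
    <<: *DEFAULTS
YML

# Psych 4 (Ruby 3.1+) needs aliases enabled explicitly;
# older Psych raises ArgumentError on the keyword, hence the rescue.
config = begin
  YAML.load(raw, aliases: true)
rescue ArgumentError
  YAML.load(raw)
end

app_config = config['development'] # what the initializer does with Rails.env
puts app_config['project_type']['pub'] # → Public project
```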
Pros: fast reads straight from memory, a clear structure, and each environment can be configured independently.
Cons: adding or changing configuration data requires a service restart.
rails g controller admin/dashboard index
This generates the following directory structure:
controllers
|__admin
   |__dashboard_controller.rb
and adds the following route to routes.rb:
namespace :admin do
  get 'dashboard/index'
end
Create a shared AdminController; sample content:
class AdminController < ApplicationController
  layout 'admin' # if needed, give the admin pages a layout of their own
  before_filter :require_admin

  private

  def require_admin
    # admin authorization code goes here
  end
end
All controllers under the admin namespace then inherit from this AdminController; for example, in admin/dashboard_controller.rb:
class Admin::DashboardController < AdminController
  def index
  end
end
PS: to add more admin-only pages, just follow the dashboard_controller.rb pattern - building your own web admin backend is that easy.