Solr and .NET Series (6): Scheduled Delta Indexing and Security
A Solr delta import is nothing more than an HTTP request, but firing that request by hand obviously isn't enough; what we want is an automatic, scheduled delta index. A dataimportscheduler jar is available that handles this for us.
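To see exactly what the scheduler automates, this is the kind of request a delta import boils down to (a sketch; the core name collection1, host and port are assumptions that depend entirely on your own setup):
http://localhost:8080/solr/collection1/dataimport?command=delta-import&clean=false&commit=true
You could open that URL in a browser or curl it on a schedule yourself; the jar described below simply fires it for you from inside Tomcat.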
First, download apache-solr-dataimportscheduler-1.0.jar from: http://solr-dataimport-scheduler.googlecode.com/files/apache-solr-dataimportscheduler-1.0.jar
The official address is sometimes unreachable; if so, use this mirror instead: http://pan.baidu.com/s/1pJt3KZD
Now for the configuration:
1. Copy apache-solr-dataimportscheduler-1.0.jar to C:\Program Files\Apache Software Foundation\Tomcat 7.0\webapps\solr\WEB-INF\lib (C:\Program Files\Apache Software Foundation\Tomcat 7.0 is the Tomcat install path).
2. Edit web.xml under C:\Program Files\Apache Software Foundation\Tomcat 7.0\webapps\solr\WEB-INF and add the following before the servlet elements:
<listener>
  <listener-class>
    org.apache.solr.handler.dataimport.scheduler.ApplicationListener
  </listener-class>
</listener>
3. Extract dataimport.properties from apache-solr-dataimportscheduler-1.0.jar and put it in C:\Program Files\Apache Software Foundation\Tomcat 7.0\solr\conf (create the conf folder if it does not exist).
4. Restart Tomcat. (A quick way to check that everything responds is sketched below.)
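After the restart, one quick check is to hit the data import handler's status command from a browser or curl (the core name collection1, host and port here are assumptions for illustration):
http://localhost:8080/solr/collection1/dataimport?command=status
Once the configured interval elapses you should also see the scheduler's sync runs show up in the Tomcat logs.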
Notes on the dataimport.properties settings
#################################################
#                                               #
#       dataimport scheduler properties         #
#                                               #
#################################################
# to sync or not to sync
# 1 - active; anything else - inactive
syncEnabled=1
# which cores to schedule
# in a multi-core environment you can decide which cores you want synchronized
# leave empty or comment it out if using single-core deployment
syncCores=game,resource
# solr server name or IP address
# [defaults to localhost if empty]
server=localhost
# solr server port
# [defaults to 80 if empty]
port=8080
# application name/context
# [defaults to current ServletContextListener's context (app) name]
webapp=solr
# URL params [mandatory]
# remainder of URL
params=/select?qt=/dataimport&command=delta-import&clean=false&commit=true
# schedule interval
# number of minutes between two runs
# [defaults to 30 if empty]
interval=1
# interval between full index rebuilds, in minutes; defaults to 7200 (i.e. five days)
# empty, 0, or commented out means the index is never rebuilt
reBuildIndexInterval=2
# parameters used for the full rebuild
reBuildIndexParams=/select?qt=/dataimport&command=full-import&clean=true&commit=true
# start time from which the rebuild interval is counted; the first actual run happens at reBuildIndexBeginTime + reBuildIndexInterval*60*1000
# two formats are accepted: 2012-04-11 03:10:00, or 03:10:00, in which case the date part is filled in with the date the service was started
reBuildIndexBeginTime=03:10:00
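To make the timing concrete with the sample values above: reBuildIndexBeginTime=03:10:00 and reBuildIndexInterval=2 mean the first full rebuild fires at 03:12:00 on the day the service starts (03:10:00 plus 2 * 60 * 1000 ms), and a rebuild runs every 2 minutes after that.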
That is the stock file; everything after a # is a comment. Below is the same file as I actually use it, with my own notes:
#################################################
#                                               #
#       dataimport scheduler properties         #
#                                               #
#################################################
syncEnabled=1
# cores to run the scheduled delta import on; separate multiple cores with commas, e.g. collection1,collection2
syncCores= collection1
# server address (self-explanatory)
server=192.168.0.9
port=8080
webapp=solr
# the command the scheduler fires for the delta import
params=/dataimport?command=delta-import&clean=false&commit=true
# how often to run, in minutes (the default unit)
interval=30
# The three settings below come from someone's modified build of the scheduler that adds scheduled full rebuilds; the original package only does delta imports and does not understand these three lines, so delete them if you don't need them.
reBuildIndexInterval=7200
reBuildIndexParams=/dataimport?command=full-import&clean=true&commit=true
reBuildIndexBeginTime=03:10:00
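Putting the values together, my understanding is that the scheduler assembles each request roughly as http://<server>:<port>/<webapp>/<core><params>, so the configuration above would fire something like:
http://192.168.0.9:8080/solr/collection1/dataimport?command=delta-import&clean=false&commit=true
If your data import handler is registered under a different path, adjust params accordingly (as in the /select?qt=/dataimport form shown in the stock file).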
If you search for other articles on this you will find people saying the official package has a bug because it submits via POST, but in my testing the official package works fine, and everything above runs without problems in my own project.
If you want to see how to add scheduled full rebuilds to the original jar, or read about that package's bug, see http://www.denghuafeng.com/post-242.html
With that in place, your Solr instance will run delta imports on a schedule.
Now let's look at the question of Solr security.
Once you understand Solr, you know that every operation is carried out over HTTP, and that raises a problem: adds and deletes are plain HTTP requests too, so once the address of your Solr server leaks, your data is open to attack. My solution here is to restrict access at the Tomcat level so that only fixed IP addresses can reach Solr.
Edit C:\Program Files\Apache Software Foundation\Tomcat 7.0\conf\server.xml and add the IP restriction there.
Global setting (applies to every application under this Tomcat):
add the following line to server.xml and restart the server:
<Valve className="org.apache.catalina.valves.RemoteAddrValve" allow="192.168.1.*" deny=""/> (place this line just before </Host>)
Examples:
1. Allow only 192.168.1.10:
<Valve className="org.apache.catalina.valves.RemoteAddrValve" allow="192.168.1.10" deny=""/>
2. Allow only the 192.168.1.* segment:
<Valve className="org.apache.catalina.valves.RemoteAddrValve" allow="192.168.1.*" deny=""/>
3. Allow only 192.168.1.10 and 192.168.1.30:
<Valve className="org.apache.catalina.valves.RemoteAddrValve" allow="192.168.1.10,192.168.1.30" deny=""/>
4. Restrict by host name:
<Valve className="org.apache.catalina.valves.RemoteHostValve" allow="abc.com" deny=""/>
Permalink: Solr and .NET Series (6): Scheduled Delta Indexing and Security. Please credit the source when reposting.
For background reasons, the master-slave setup described here still has to be based on MySQL 5.1. The motivation is that a single database handling all reads and writes was running too hot, so reads need to be offloaded to a slave.
Introduction to MySQL master-slave replication
1. Create the MySQL group and user
#groupadd mysql
#useradd -s /sbin/nologin -g mysql -M mysql
#tail -1 /etc/passwd
Create a directory for the MySQL sources
#mkdir -p /home/tools
#cd /home/tools/
2. Compile and install MySQL (source: http://down1.chinaunix.net/distfiles/mysql-5.1.62.tar.gz)
#tar zxf mysql-5.1.62.tar.gz
#cd mysql-5.1.62
Configure:
./configure \
--prefix=/usr/local/mysql \
--with-unix-socket-path=/usr/local/mysql/tmp/mysql.sock \
--localstatedir=/usr/local/mysql/data \
--enable-assembler \
--enable-thread-safe-client \
--with-mysqld-user=mysql \
--with-big-tables \
--without-debug \
--with-pthread \
--with-extra-charsets=complex \
--with-ssl \
--with-embedded-server \
--enable-local-infile \
--with-plugins=partition,innobase \
--with-mysqld-ldflags=-all-static \
--with-client-ldflags=-all-static
3. Build (this statically compiles the mysqld binary)
#make
4. Install MySQL
#make install
5. Pick up a MySQL configuration file
#ls -l support-files/*.cnf
#cp support-files/my-small.cnf /etc/my.cnf
6. Create the data directory and initialize the system tables
#mkdir -p /usr/local/mysql/data
#chown -R mysql.mysql /usr/local/mysql
#/usr/local/mysql/bin/mysql_install_db --user=mysql
7. Start the MySQL server
#cp support-files/mysql.server /usr/local/mysql/bin
#/usr/local/mysql/bin/mysqld_safe --user=mysql &
#netstat -lnt|grep 3306
8. Put the MySQL commands on the global PATH
#echo 'export PATH=$PATH:/usr/local/mysql/bin' >>/etc/profile
#source /etc/profile
9. Set things up so the database can be started with /etc/init.d/mysqld start
#cp support-files/mysql.server /etc/init.d/mysqld
#chmod 700 /etc/init.d/mysqld
#/etc/init.d/mysqld restart
Multi-instance installation
1. Use the port numbers as second-level directories
#mkdir -p /data/{3306,3307}/data
#ls -l support-files/*.cnf
#/bin/cp support-files/my-small.cnf /etc/my.cnf
#vi /data/3306/my.cnf
#vi /data/3307/my.cnf
Contents of /data/3306/my.cnf:
[client]
port = 3306
socket = /data/3306/mysql.sock

[mysql]
no-auto-rehash

[mysqld]
user = mysql
port = 3306
socket = /data/3306/mysql.sock
basedir = /usr/local/mysql
datadir = /data/3306/data
open_files_limit = 1024
back_log = 600
max_connections = 800
max_connect_errors = 3000
table_cache = 614
external-locking = FALSE
max_allowed_packet = 8M
sort_buffer_size = 1M
join_buffer_size = 1M
thread_cache_size = 100
thread_concurrency = 2
query_cache_size = 2M
query_cache_limit = 1M
query_cache_min_res_unit = 2k
default_table_type = InnoDB
thread_stack = 192K
transaction_isolation = READ-COMMITTED
tmp_table_size = 2M
max_heap_table_size = 2M
long_query_time = 1
log_long_format
log-error = /data/3306/error.log
log-slow-queries = /data/3306/slow.log
pid-file = /data/3306/mysql.pid
log-bin = /data/3306/mysql-bin
relay-log = /data/3306/relay-bin
relay-log-info-file = /data/3306/relay-log.info
binlog_cache_size = 1M
max_binlog_cache_size = 1M
max_binlog_size = 2M
expire_logs_days = 7
key_buffer_size = 16M
read_buffer_size = 1M
read_rnd_buffer_size = 1M
bulk_insert_buffer_size = 1M
myisam_sort_buffer_size = 1M
myisam_max_sort_file_size = 10G
myisam_max_extra_sort_file_size = 10G
myisam_repair_threads = 1
myisam_recover
lower_case_table_names = 1
skip-name-resolve
slave-skip-errors = 1032,1062
replicate-ignore-db = mysql
server-id = 1
innodb_additional_mem_pool_size = 4M
innodb_buffer_pool_size = 32M
innodb_data_file_path = ibdata1:128M:autoextend
innodb_file_io_threads = 4
innodb_thread_concurrency = 8
innodb_flush_log_at_trx_commit = 2
innodb_log_buffer_size = 2M
innodb_log_file_size = 4M
innodb_log_files_in_group = 3
innodb_max_dirty_pages_pct = 90
innodb_lock_wait_timeout = 120
innodb_file_per_table = 0

[mysqldump]
quick
max_allowed_packet = 2M

[mysqld_safe]
log-error = /data/3306/mysql_barry3306.err
pid-file = /data/3306/mysqld.pid
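Only the my.cnf for the 3306 instance is shown here; presumably /data/3307/my.cnf is the same file with the port, socket, datadir and log/pid paths switched to their 3307 equivalents and with a different server-id (every server taking part in replication needs a unique server-id).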
The startup/shutdown script, saved as /data/3306/mysql:
#!/bin/sh
#/data/3306/mysql script
#init
port=3306
mysql_user="root"
mysql_pwd=""
CmdPath="/usr/local/mysql/bin"

#startup function
function_start_mysql()
{
    printf "Starting MySQL...\n"
    /bin/sh ${CmdPath}/mysqld_safe --defaults-file=/data/${port}/my.cnf >/dev/null 2>&1 &
}

#stop function
function_stop_mysql()
{
    printf "Stopping MySQL...\n"
    ${CmdPath}/mysqladmin -u ${mysql_user} -p${mysql_pwd} -S /data/${port}/mysql.sock shutdown
}

#restart function
function_restart_mysql()
{
    printf "Restarting MySQL...\n"
    function_stop_mysql
    sleep 2
    function_start_mysql
}

case $1 in
  start)
    function_start_mysql
    ;;
  stop)
    function_stop_mysql
    ;;
  restart)
    function_restart_mysql
    ;;
  *)
    printf "Usage: /data/${port}/mysql {start|stop|restart}\n"
esac
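Once saved, it is made executable and used like this (the 3307 instance would get its own copy with port=3307):
#chmod 700 /data/3306/mysql
#/data/3306/mysql start
#/data/3306/mysql stop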
#tree /data
/data
|-- 3306
|   |-- my.cnf
|   |-- mysql
|   `-- data
`-- 3307
    |-- my.cnf
    |-- mysql
    `-- data
Grant the replication privileges
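As a minimal sketch of the grant and of pointing the slave at the master on MySQL 5.1, assuming the 3307 instance replicates from the 3306 instance on the same box, and with placeholder values for the rep account, its password and the binlog file/position (take the real ones from SHOW MASTER STATUS):
On the master (the 3306 instance):
#mysql -uroot -S /data/3306/mysql.sock -e "GRANT REPLICATION SLAVE ON *.* TO 'rep'@'127.0.0.1' IDENTIFIED BY 'replpass'; FLUSH PRIVILEGES;"
#mysql -uroot -S /data/3306/mysql.sock -e "SHOW MASTER STATUS;"
On the slave (the 3307 instance), substituting the File and Position values reported above:
#mysql -uroot -S /data/3307/mysql.sock -e "CHANGE MASTER TO MASTER_HOST='127.0.0.1', MASTER_PORT=3306, MASTER_USER='rep', MASTER_PASSWORD='replpass', MASTER_LOG_FILE='mysql-bin.000001', MASTER_LOG_POS=106; START SLAVE;"
#mysql -uroot -S /data/3307/mysql.sock -e "SHOW SLAVE STATUS\G"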