This article explains in detail how to solve connection-timeout problems in a Java web crawler. The editor finds it quite practical and shares it here for reference; I hope you will get something out of it after reading.
The details are as follows.
When writing a web crawler, you will often run into the following error: a connection timeout. The usual way to handle it is to lengthen the connection and read timeouts, and, if a timeout still occurs, to re-issue the request up to a configurable number of retries.
Exception in thread "main" java.net.ConnectException: Connection timed out: connect
The code below is a sample program that uses HttpClient to deal with connection timeouts. Straight to the code.
package daili;

import java.io.IOException;
import java.net.URI;

import org.apache.http.HttpResponse;
import org.apache.http.client.ClientProtocolException;
import org.apache.http.client.methods.HttpGet;
import org.apache.http.client.params.CookiePolicy;
import org.apache.http.impl.client.DefaultHttpClient;
import org.apache.http.impl.client.DefaultHttpRequestRetryHandler;
import org.apache.http.params.HttpConnectionParams;
import org.apache.http.params.HttpParams;
import org.apache.http.util.EntityUtils;

/*
 * author: Qian Yang, School of Management, Hefei University of Technology
 * 1563178220@qq.com
 */
public class Test1 {
    public static void main(String[] args) throws ClientProtocolException, IOException, InterruptedException {
        getRawHTML("http://club.autohome.com.cn/bbs/forum-c-2098-1.html#pvareaid=103447");
    }

    public static String getRawHTML(String url) throws ClientProtocolException, IOException, InterruptedException {
        // Initialize the client and use a lenient cookie policy
        DefaultHttpClient httpclient = new DefaultHttpClient();
        httpclient.getParams().setParameter("http.protocol.cookie-policy",
                CookiePolicy.BROWSER_COMPATIBILITY);
        // Timeout parameters
        HttpParams params = httpclient.getParams();
        // Connection timeout: 6 seconds
        HttpConnectionParams.setConnectionTimeout(params, 6000);
        // Socket (read) timeout: 120 seconds
        HttpConnectionParams.setSoTimeout(params, 6000 * 20);
        // Retry a timed-out request up to 5 times
        DefaultHttpRequestRetryHandler dhr = new DefaultHttpRequestRetryHandler(5, true);
        httpclient.setHttpRequestRetryHandler(dhr);

        HttpGet request = new HttpGet();
        request.setURI(URI.create(url));
        // User-Agent is a request header, not a cookie
        request.setHeader("User-Agent", "Mozilla/5.0 (Windows NT 10.0; WOW64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/52.0.2743.116 Safari/537.36");

        String rawHTML = "";
        HttpResponse response = httpclient.execute(request);
        // Get the response status code
        int StatusCode = response.getStatusLine().getStatusCode();
        System.out.println(StatusCode);
        if (StatusCode == 200) { // status code 200 means the request succeeded
            // Read the entity content (the page's raw HTML)
            rawHTML = EntityUtils.toString(response.getEntity());
            System.out.println(rawHTML);
            // Release the entity's underlying stream
            EntityUtils.consume(response.getEntity());
        } else {
            // On failure, release the entity and sleep 20 minutes before trying again
            EntityUtils.consume(response.getEntity());
            Thread.sleep(20 * 60 * 1000);
        }
        httpclient.close();
        return rawHTML;
    }
}
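Note that DefaultHttpClient and the HttpParams-based configuration used above have been deprecated since HttpClient 4.3. As a reference, here is a minimal sketch of the same idea (longer timeouts plus automatic retries) written against the newer RequestConfig/HttpClients builder API; the class name Test2 and the timeout and retry values are illustrative, not taken from the original article.

package daili;

import java.io.IOException;

import org.apache.http.client.config.RequestConfig;
import org.apache.http.client.methods.CloseableHttpResponse;
import org.apache.http.client.methods.HttpGet;
import org.apache.http.impl.client.CloseableHttpClient;
import org.apache.http.impl.client.DefaultHttpRequestRetryHandler;
import org.apache.http.impl.client.HttpClients;
import org.apache.http.util.EntityUtils;

public class Test2 {
    public static void main(String[] args) throws IOException {
        // Connect and socket (read) timeouts in milliseconds; values are illustrative
        RequestConfig config = RequestConfig.custom()
                .setConnectTimeout(6000)
                .setSocketTimeout(6000 * 20)
                .build();

        // Retry a failed request up to 5 times
        CloseableHttpClient client = HttpClients.custom()
                .setDefaultRequestConfig(config)
                .setRetryHandler(new DefaultHttpRequestRetryHandler(5, true))
                .build();

        HttpGet get = new HttpGet("http://club.autohome.com.cn/bbs/forum-c-2098-1.html#pvareaid=103447");
        get.setHeader("User-Agent", "Mozilla/5.0 (Windows NT 10.0; WOW64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/52.0.2743.116 Safari/537.36");

        try (CloseableHttpResponse response = client.execute(get)) {
            int statusCode = response.getStatusLine().getStatusCode();
            System.out.println(statusCode);
            if (statusCode == 200) {
                // Print the page's raw HTML on success
                System.out.println(EntityUtils.toString(response.getEntity()));
            } else {
                // Release the entity so the connection can be reused
                EntityUtils.consume(response.getEntity());
            }
        } finally {
            client.close();
        }
    }
}

The try-with-resources block ensures the response is closed even if reading the entity throws, which keeps the underlying connection from leaking across retries.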
Result: the program prints the response status code, followed by the page's raw HTML when the status is 200.
That is all for "how to solve connection timeouts in a Java web crawler". I hope the content above is of some help and lets you learn a bit more; if you found the article useful, please share it so more people can see it.