As the title says, I am using CloudScraper to fetch a Cloudflare-protected website. I get a 200 response, but the content is just the error message "You must enable javascript to view this page."
When I instead request the page through requests_html to render the JavaScript, I get a 403 response with "CloudFlare | error cookies must be enabled". I have already tried the different JavaScript interpreters that CloudScraper supports.
Is there a way to make the initial request using CloudScraper and then pass the result to requests_html to render the JavaScript?
Here is a sample of what I am trying to do:
import cloudscraper
from bs4 import BeautifulSoup
from requests_html import HTML

scraper = cloudscraper.create_scraper(
    browser={'browser': 'firefox', 'platform': 'windows', 'mobile': False}
)
# cloudscraper gets past the Cloudflare challenge, but the body it returns
# is the un-rendered HTML ("You must enable javascript to view this page.")
html = scraper.get("https://visas-de.tlscontact.com/visa/eg/egCAI2de/home").content

# Requesting the page a second time with HTMLSession gives a 403, so instead
# wrap the HTML that cloudscraper already fetched and render it locally
page = HTML(html=html)
page.render()

soup = BeautifulSoup(page.html, 'html.parser')
So basically, the main idea is to fetch the page through CloudScraper to bypass the Cloudflare protection, then pass the resulting HTML to requests_html to render the JavaScript content.
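Another approach worth sketching (not verified against this particular site): instead of passing the HTML across, let CloudScraper solve the Cloudflare challenge and then copy its session cookies (including the `cf_clearance` token) and its User-Agent header into the requests_html session, so the second request is not rejected. Cloudflare ties the clearance cookie to the User-Agent that earned it, so both must be transferred. The helper name `transfer_clearance` is my own; the function itself only moves cookies and a header between session-like objects.

```python
def transfer_clearance(src_headers, src_cookies, dst_session):
    """Copy the Cloudflare clearance state from a cloudscraper session
    into another requests-compatible session.

    src_headers: the cloudscraper session's headers (a mapping)
    src_cookies: the cloudscraper session's cookies as a plain dict,
                 e.g. scraper.cookies.get_dict()
    dst_session: a session exposing .headers (mapping) and .cookies.set()
    """
    # cf_clearance is only honoured when sent with the same User-Agent
    # that solved the challenge, so copy that header as well
    dst_session.headers['User-Agent'] = src_headers['User-Agent']
    for name, value in src_cookies.items():
        dst_session.cookies.set(name, value)
    return dst_session

# Usage sketch (assumes cloudscraper and requests_html are installed):
# scraper = cloudscraper.create_scraper()
# scraper.get(url)                      # solves the challenge
# session = HTMLSession()
# transfer_clearance(scraper.headers, scraper.cookies.get_dict(), session)
# response = session.get(url)
# response.html.render()
```

This keeps the rendering path entirely inside requests_html, at the cost of one extra request to the site.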